https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.rst.txt
```
Natural Language Processing (NLP) Tutorials (``torch-neuron``)
==============================================================

* HuggingFace pretrained BERT tutorial :ref:`[html] </src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.ipynb>` :pytorch-neuron-src:`[notebook] <bert_tutorial/tutorial_pretrained_bert.ipynb>`
* Bring your own HuggingFace pretrained BERT container to SageMaker tutorial :ref:`[html] </src/examples/pytorch/byoc_sm_bert_tutorial/sagemaker_container_neuron.ipynb>` :pytorch-neuron-src:`[notebook] <byoc_sm_bert_tutorial/sagemaker_container_neuron.ipynb>`
* LibTorch C++ tutorial :ref:`[html] <pytorch-tutorials-libtorch>`
* TorchServe tutorial :ref:`[html] <pytorch-tutorials-torchserve>`
* HuggingFace MarianMT tutorial :ref:`[html] </src/examples/pytorch/transformers-marianmt.ipynb>` :pytorch-neuron-src:`[notebook] <transformers-marianmt.ipynb>`

.. toctree::
   :hidden:

   /src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.ipynb
   /src/examples/pytorch/byoc_sm_bert_tutorial/sagemaker_container_neuron.ipynb
   /neuron-guide/neuron-frameworks/pytorch-neuron/tutorials/tutorial-libtorch
   /frameworks/torch/torch-neuron/tutorials/tutorial-torchserve
   /src/examples/pytorch/transformers-marianmt.ipynb
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.rst.txt
```
API Reference Guide (``torch-neuron``)
======================================

.. toctree::
   :maxdepth: 1
   :hidden:

   PyTorch Neuron trace Python API </frameworks/torch/torch-neuron/api-compilation-python-api>
   torch.neuron.DataParallel API </frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api>
   /frameworks/torch/torch-neuron/api-core-placement

.. include:: /frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.txt
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuron/developer-guide-torch-neuron.rst.txt
```
Developer Guide (``torch-neuron``)
==================================

.. toctree::
   :maxdepth: 1
   :hidden:

   Running Inference on Variable Input Shapes with Bucketing </general/appnotes/torch-neuron/bucketing-app-note>
   Data Parallel Inference on PyTorch Neuron </general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note>
   /frameworks/torch/torch-neuron/guides/torch-lstm-support
   /frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement

.. include:: /frameworks/torch/torch-neuron/developer-guide-torch-neuron.txt
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuron/api-core-placement.rst.txt
```
.. _torch_core_placement_api:

PyTorch Neuron (``torch-neuron``) Core Placement API [Experimental]
===================================================================

.. automodule:: placement
   :module-name: torch_neuron.experimental
   :members:
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.rst.txt
```
.. _api_torch_neuron_dataparallel_api:

torch.neuron.DataParallel API
=============================

The :func:`torch.neuron.DataParallel` Python API implements data parallelism on
:class:`~torch.jit.ScriptModule` models created by the
:ref:`torch_neuron_trace_api`.
This function is analogous to :class:`~torch.nn.DataParallel` in PyTorch.
The :ref:`torch-neuron-dataparallel-app-note` application note provides an
overview of how :func:`torch.neuron.DataParallel` can be used to improve
the performance of inference workloads on Inferentia.
.. py:function:: torch.neuron.DataParallel(model, device_ids=None, dim=0)

   Applies data parallelism by replicating the model on
   available NeuronCores and distributing data across the different
   NeuronCores for parallelized inference.

   By default, DataParallel will use all available NeuronCores
   allocated for the current process for parallelism. DataParallel will
   apply parallelism on ``dim=0`` if ``dim`` is not specified.

   DataParallel automatically enables
   :ref:`dynamic batching <dynamic_batching_description>` on
   eligible models if ``dim=0``. Dynamic batching can be disabled using
   :func:`torch.neuron.DataParallel.disable_dynamic_batching`.
   If dynamic batching is not enabled, the batch size at compilation time must
   be equal to the batch size at inference time divided by the number of
   NeuronCores being used. Specifically, the following must be true when
   dynamic batching is disabled:
   ``input.shape[dim] / len(device_ids) == compilation_input.shape[dim]``.
   DataParallel will throw a warning if dynamic batching cannot be enabled.

   DataParallel will try to load all of a model's NEFFs onto
   a single NeuronCore, but only if all of the NEFFs can fit on a single
   NeuronCore. DataParallel does not currently support models that
   have been compiled with :ref:`neuroncore-pipeline`.

   :func:`torch.neuron.DataParallel` requires PyTorch >= 1.8.
   *Required Arguments*

   :arg ~torch.jit.ScriptModule model: Model created by the
      :ref:`torch_neuron_trace_api`
      to be parallelized.

   *Optional Arguments*

   :arg list device_ids: List of :obj:`int` or ``'nc:#'`` that specify the
      NeuronCores to use for parallelization (default: all NeuronCores).
      Refer to the :ref:`device_ids note <device_ids_note>` for a description
      of how ``device_ids`` indexing works.
   :arg int dim: Dimension along which the input tensor is scattered across
      NeuronCores (default ``dim=0``).

   *Attributes*

   :arg int num_workers: Number of worker threads used for
      multithreaded inference (default: ``2 * number of NeuronCores``).
   :arg int split_size: Size of the input chunks
      (default: ``max(1, input.shape[dim] // number of NeuronCores)``).
.. py:function:: torch.neuron.DataParallel.disable_dynamic_batching()

   Disables automatic dynamic batching on the DataParallel module. See
   :ref:`Dynamic batching disabled <dataparallel_example_disable_dynamic_batching_api>`
   for an example of how DataParallel can be used with dynamic batching disabled.
   Use as follows:

   >>> model_parallel = torch.neuron.DataParallel(model_neuron)
   >>> model_parallel.disable_dynamic_batching()
.. _device_ids_note:

.. note::

   ``device_ids`` uses per-process NeuronCore granularity and zero-based
   indexing. Per-process granularity means that each Python process "sees"
   its own view of the world. Specifically, this means that ``device_ids``
   only "sees" the NeuronCores that are allocated for the current process.
   Zero-based indexing means that each Python process will index its
   allocated NeuronCores starting at 0, regardless of the "global" index of
   the NeuronCores. Zero-based indexing makes it possible to redeploy the
   exact same code unchanged in different processes. This behavior is
   analogous to the ``device_ids`` argument in the PyTorch
   :class:`~torch.nn.DataParallel` function.

   As an example, assume DataParallel is run on an inf1.6xlarge, which
   contains four Inferentia chips, each of which contains four NeuronCores:

   * If ``NEURON_RT_VISIBLE_CORES`` is not set, a single process can access
     all 16 NeuronCores. Thus, specifying ``device_ids=["nc:0"]`` will
     correspond to chip0:core0 and ``device_ids=["nc:14"]`` will correspond
     to chip3:core2.

   * However, if two processes are launched where process 1 has
     ``NEURON_RT_VISIBLE_CORES=0-6`` and process 2 has
     ``NEURON_RT_VISIBLE_CORES=7-15``, then ``device_ids=["nc:14"]``
     cannot be specified in either process. Instead, chip3:core2 can only be
     accessed in process 2, where it is specified with
     ``device_ids=["nc:7"]``. Furthermore, in process 1,
     ``device_ids=["nc:0"]`` would correspond to chip0:core0; in process 2,
     ``device_ids=["nc:0"]`` would correspond to chip1:core3.
Examples
--------

The following sections provide example usages of the
:func:`torch.neuron.DataParallel` module.

Default usage
^^^^^^^^^^^^^

.. include:: /frameworks/torch/torch-neuron/torch-neuron-dataparallel-example-default.rst

Specifying NeuronCores
^^^^^^^^^^^^^^^^^^^^^^

.. include:: /frameworks/torch/torch-neuron/torch-neuron-dataparallel-example-specify-ncs.rst

DataParallel with dim != 0
^^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: /frameworks/torch/torch-neuron/torch-neuron-dataparallel-example-dim-neq-zero.rst

Dynamic batching
^^^^^^^^^^^^^^^^

.. include:: /frameworks/torch/torch-neuron/torch-neuron-dataparallel-example-dynamic-batching.rst

.. _dataparallel_example_disable_dynamic_batching_api:

Dynamic batching disabled
^^^^^^^^^^^^^^^^^^^^^^^^^

.. include:: /frameworks/torch/torch-neuron/torch-neuron-dataparallel-example-disable-dynamic-batching.rst

Full tutorial with torch.neuron.DataParallel
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For an end-to-end tutorial that uses DataParallel, see the
:ref:`PyTorch Resnet Tutorial </src/examples/pytorch/resnet50.ipynb>`.
```
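The behavior described above can be seen in a minimal sketch (this assumes an Inf1 instance with the `torch-neuron` package installed; the toy model, shapes, and core counts here are illustrative and not from the original document):

```python
import torch
import torch.neuron  # registers the torch.neuron namespace (torch-neuron package)

# Trace a toy model once with the per-core (compilation-time) batch size.
model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU())
example = torch.rand(1, 128)                      # compilation batch size = 1
model_neuron = torch.neuron.trace(model, example)

# Replicate across two NeuronCores (zero-based, per-process indices).
model_parallel = torch.neuron.DataParallel(model_neuron, device_ids=[0, 1], dim=0)

# With dynamic batching enabled (the default on dim=0), any batch size works.
outputs = model_parallel(torch.rand(8, 128))

# With dynamic batching disabled, the inference batch size must satisfy
# input.shape[0] / len(device_ids) == compilation batch size, i.e. 2 / 2 == 1.
model_parallel.disable_dynamic_batching()
outputs = model_parallel(torch.rand(2, 128))
```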
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/torch/torch-neuron/tutorials/tutorial-torchserve.html#pytorch-tutorials-torchserve
# BERT TorchServe Tutorial — AWS Neuron Documentation
_This document is relevant for_: `Inf1`
## BERT TorchServe Tutorial
Table of Contents
- [Overview](#overview)
- [Run the tutorial](#run-the-tutorial)
- [Setup TorchServe](#setup-torchserve)
- [Run TorchServe](#run-torchserve)
- [Benchmark TorchServe](#benchmark-torchserve)
## Overview
This tutorial demonstrates the use of [TorchServe](https://pytorch.org/serve) with Neuron, the SDK for Amazon Inf1 instances. By the end of this tutorial, you will understand how TorchServe can be used to serve a model backed by EC2 Inf1 instances. We will use a pretrained BERT-Base model to determine if one sentence is a paraphrase of another.
## Run the tutorial
Open a terminal, log into your remote instance, and activate a PyTorch virtual environment (see the [PyTorch Installation Guide](../setup/pytorch-install.html#install-neuron-pytorch)). To complete this tutorial, you will need a compiled BERT model. If you have already completed the HuggingFace Pretrained BERT tutorial [\[html\]](../../../../src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html) [\[notebook\]](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.ipynb), then you already have the necessary file. Otherwise, you can set up your environment as shown below and then run [`trace_bert_neuron.py`](../../../../_downloads/4640004148fe54855750b60c95066e8c/trace_bert_neuron.py) to obtain a traced BERT model.
You should now have a compiled `bert_neuron_b6.pt` file, which is required going forward.
Open a shell on the instance you prepared earlier and create a new directory named `torchserve`. Copy your compiled model from the previous tutorial into this new directory.
Prepare a new Python virtual environment with the necessary Neuron and TorchServe components. Use a virtual environment to keep (most of) the various tutorial components isolated from the rest of the system in a controlled way.
```
pip install transformers==4.20.1 torchserve==0.7.0 torch-model-archiver==0.7.0 captum==0.6.0
```
Install the system requirements for TorchServe.
Amazon Linux 2 DLAMI Base
```
sudo yum install jq java-11-amazon-corretto-headless
sudo alternatives --config java
sudo alternatives --config javac
```
Ubuntu 20 DLAMI Base
```
sudo apt install openjdk-11-jdk
```
Check the Java version (for example, with `java -version`); the output should look similar to:
```
openjdk version "11.0.17" 2022-10-18
OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu218.04)
OpenJDK 64-Bit Server VM (build 11.0.17+8-post-Ubuntu-1ubuntu218.04, mixed mode, sharing)
```
Verify that TorchServe is now available (for example, by running `torchserve --version`).
```
TorchServe Version is 0.7.0
```
## Setup TorchServe
During this tutorial you will need to download a few files onto your instance. The simplest way to accomplish this is to paste the download links provided above each file into a `wget` command. (We don’t provide the links directly because they are subject to change.) For example, right-click and copy the download link for `config.json` shown below.
```
{
"model_name": "bert-base-cased-finetuned-mrpc",
"max_length": 128,
"batch_size": 6
}
```
Now execute the following in your shell:
```
wget <paste link here>
ls
```
```
bert_neuron_b6.pt config.json
```
Download the [custom handler script](https://pytorch.org/serve/custom_service.html) that will eventually respond to inference requests.
```
import os
import json
import sys
import logging
from abc import ABC

import torch
import torch_neuron

from transformers import AutoTokenizer
from ts.torch_handler.base_handler import BaseHandler


# one core per worker
os.environ['NEURON_RT_NUM_CORES'] = '1'

logger = logging.getLogger(__name__)

class BertEmbeddingHandler(BaseHandler, ABC):
    """
    Handler class for Bert Embedding computations.
    """
    def __init__(self):
        super(BertEmbeddingHandler, self).__init__()
        self.initialized = False

    def initialize(self, ctx):
        self.manifest = ctx.manifest
        properties = ctx.system_properties
        self.device = 'cpu'
        model_dir = properties.get('model_dir')
        serialized_file = self.manifest['model']['serializedFile']
        model_pt_path = os.path.join(model_dir, serialized_file)

        # load the tokenizer name and batching settings from our config file
        with open('config.json') as fp:
            config = json.load(fp)
        self.max_length = config['max_length']
        self.batch_size = config['batch_size']
        self.classes = ['not paraphrase', 'paraphrase']

        self.model = torch.jit.load(model_pt_path)
        logger.debug(f'Model loaded from {model_dir}')
        self.model.to(self.device)
        self.model.eval()

        self.tokenizer = AutoTokenizer.from_pretrained(config['model_name'])
        self.initialized = True

    def preprocess(self, input_data):
        """
        Tokenization pre-processing
        """

        input_ids = []
        attention_masks = []
        token_type_ids = []
        for row in input_data:
            seq_0 = row['seq_0'].decode('utf-8')
            seq_1 = row['seq_1'].decode('utf-8')
            logger.debug(f'Received text: "{seq_0}", "{seq_1}"')

            inputs = self.tokenizer.encode_plus(
                seq_0,
                seq_1,
                max_length=self.max_length,
                padding='max_length',
                truncation=True,
                return_tensors='pt'
            )

            input_ids.append(inputs['input_ids'])
            attention_masks.append(inputs['attention_mask'])
            token_type_ids.append(inputs['token_type_ids'])

        batch = (torch.cat(input_ids, 0),
                 torch.cat(attention_masks, 0),
                 torch.cat(token_type_ids, 0))

        return batch

    def inference(self, inputs):
        """
        Predict the class of a text using a trained transformer model.
        """

        # sanity check dimensions
        assert(len(inputs) == 3)
        num_inferences = len(inputs[0])
        assert(num_inferences <= self.batch_size)

        # insert padding if we received a partial batch
        padding = self.batch_size - num_inferences
        if padding > 0:
            pad = torch.nn.ConstantPad1d((0, 0, 0, padding), value=0)
            inputs = [pad(x) for x in inputs]

        outputs = self.model(*inputs)[0]
        predictions = []
        for i in range(num_inferences):
            prediction = self.classes[outputs[i].argmax().item()]
            predictions.append([prediction])
            logger.debug("Model predicted: '%s'", prediction)
        return predictions

    def postprocess(self, inference_output):
        return inference_output
```
Next, we need to associate the handler script with the compiled model using `torch-model-archiver`. Run the following commands in your terminal:
```
mkdir model_store
MAX_LENGTH=$(jq '.max_length' config.json)
BATCH_SIZE=$(jq '.batch_size' config.json)
MODEL_NAME=bert-max_length$MAX_LENGTH-batch_size$BATCH_SIZE
torch-model-archiver --model-name "$MODEL_NAME" --version 1.0 --serialized-file ./bert_neuron_b6.pt --handler "./handler_bert.py" --extra-files "./config.json" --export-path model_store
```
Note
If you modify your model or a dependency, you will need to rerun the archiver command with the `-f` flag appended to update the archive.
The result of the above will be a `.mar` file inside the `model_store` directory.
```
bert-max_length128-batch_size6.mar
```
This file is essentially an archive associated with a fixed version of your model along with its dependencies (e.g. the handler code).
Note
The version specified in the `torch-model-archiver` command can be appended to REST API requests to access a specific version of your model. For example, if your model was hosted locally on port 8080 and named “bert”, the latest version of your model would be available at `http://localhost:8080/predictions/bert`, while version 1.0 would be accessible at `http://localhost:8080/predictions/bert/1.0`. We will see how to perform inference using this API in Step 6.
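As a sketch of the versioned endpoint described in the note above (assuming a model registered under the hypothetical name `bert` on a local server):

```python
import requests

payload = {'seq_0': 'First sentence.', 'seq_1': 'Second sentence.'}

# Latest registered version of the model.
print(requests.post('http://localhost:8080/predictions/bert', data=payload).json())

# Pin version 1.0 explicitly, matching `torch-model-archiver --version 1.0`.
print(requests.post('http://localhost:8080/predictions/bert/1.0', data=payload).json())
```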
Create a [custom config](https://pytorch.org/serve/configuration.html) file to set some parameters. This file will be used to configure the server at launch when we run `torchserve --start`.
```
# bind the inference API to all network interfaces (plain HTTP, no SSL)
inference_address=http://0.0.0.0:8080
default_workers_per_model=1
```
Note
This will cause TorchServe to bind on all interfaces. For security in real-world applications, you’ll probably want to use port 8443 and [enable SSL](https://pytorch.org/serve/configuration.html#enable-ssl).
## Run TorchServe
It’s time to start the server. Typically we’d want to launch this in a separate console, but for this demo we’ll just redirect output to a file.
```
torchserve --start --ncs --model-store model_store --ts-config torchserve.config 2>&1 >torchserve.log
```
Verify that the server seems to have started okay.
```
curl http://127.0.0.1:8080/ping
```
Note
If you get an error when trying to ping the server, you may have tried before the server was fully launched. Check `torchserve.log` for details.
Use the Management API to instruct TorchServe to load our model.
```
$ MAX_BATCH_DELAY=5000 # ms timeout before a partial batch is processed
$ INITIAL_WORKERS=4 # number of models that will be loaded at launch
$ curl -X POST "http://localhost:8081/models?url=$MODEL_NAME.mar&batch_size=$BATCH_SIZE&initial_workers=$INITIAL_WORKERS&max_batch_delay=$MAX_BATCH_DELAY"
```
```
{
"status": "Model \"bert-max_length128-batch_size6\" Version: 1.0 registered with 4 initial workers"
}
```
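If you prefer Python over `curl`, the same registration call can be sketched with `requests` (this is an equivalent of the command above, not an additional step):

```python
import requests

# Register the model through the TorchServe management API (port 8081).
params = {
    'url': 'bert-max_length128-batch_size6.mar',  # archive in the model store
    'batch_size': 6,
    'initial_workers': 4,       # number of model copies loaded at launch
    'max_batch_delay': 5000,    # ms to wait before processing a partial batch
}
response = requests.post('http://localhost:8081/models', params=params)
print(response.json())
```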
Note
Any additional attempts to configure the model after the initial curl request will cause the server to return a 409 error. You’ll need to stop/start/configure the server to realize any changes.
The `MAX_BATCH_DELAY` is a timeout value that determines how long to wait before processing a partial batch. This is why the handler code needs to check the batch dimension and potentially add padding. TorchServe will instantiate the number of model handlers indicated by `INITIAL_WORKERS`, so this value controls how many models we will load onto Inferentia in parallel. This tutorial was performed on an inf1.xlarge instance (one Inferentia chip), so there are four NeuronCores available. If you want to control worker scaling more dynamically, [see the docs](https://pytorch.org/serve/management_api.html#scale-workers).
Warning
If you attempt to load more models than NeuronCores available, one of two things will occur. Either the extra models will fit in device memory but performance will suffer, or you will encounter an error on your initial inference. You shouldn’t set `INITIAL_WORKERS` above the number of NeuronCores. However, you may want to use fewer cores if you are using the [NeuronCore Pipeline](../../../../general/arch/neuron-features/neuroncore-pipeline.html#neuroncore-pipeline) feature.
It looks like everything is running successfully at this point, so it’s time for an inference.
Create the `infer_bert.py` file below on your instance.
```
import json
import concurrent.futures
import requests

with open('config.json') as fp:
    config = json.load(fp)
max_length = config['max_length']
batch_size = config['batch_size']
name = f'bert-max_length{max_length}-batch_size{batch_size}'

# dispatch requests in parallel
url = f'http://localhost:8080/predictions/{name}'
paraphrase = {'seq_0': "HuggingFace's headquarters are situated in Manhattan",
              'seq_1': "The company HuggingFace is based in New York City"}
not_paraphrase = {'seq_0': paraphrase['seq_0'], 'seq_1': 'This is total nonsense.'}

with concurrent.futures.ThreadPoolExecutor(max_workers=batch_size) as executor:
    def worker_thread(worker_index):
        # we'll send half the requests as not_paraphrase examples for sanity
        data = paraphrase if worker_index < batch_size//2 else not_paraphrase
        response = requests.post(url, data=data)
        print(worker_index, response.json())

    for worker_index in range(batch_size):
        executor.submit(worker_thread, worker_index)
```
This script will send a `batch_size` number of requests to our model. In this example, we are using a model that estimates the probability that one sentence is a paraphrase of another. The script sends positive examples in the first half of the batch and negative examples in the second half.
Execute the script in your terminal (for example, `python3 infer_bert.py`); you should see output similar to the following:
```
1 ['paraphrase']
3 ['not paraphrase']
4 ['not paraphrase']
0 ['paraphrase']
5 ['not paraphrase']
2 ['paraphrase']
```
We can see that the first three threads (0, 1, 2) all report `paraphrase`, as expected. If we instead modify the script to send an incomplete batch and then wait for the timeout to expire, the excess padding results will be discarded.
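A minimal sketch of such an incomplete batch is a single request against the `batch_size=6` model (the handler pads the inputs to the compiled batch size, and the padded results are dropped):

```python
import requests

# One request is a partial batch for a model compiled with batch_size=6.
# TorchServe waits up to MAX_BATCH_DELAY ms, then the handler pads the batch
# to the compiled size and returns only the real (non-padding) prediction.
url = 'http://localhost:8080/predictions/bert-max_length128-batch_size6'
data = {'seq_0': "HuggingFace's headquarters are situated in Manhattan",
        'seq_1': 'The company HuggingFace is based in New York City'}
print(requests.post(url, data=data).json())
```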
## Benchmark TorchServe
We’ve seen how to perform a single batched inference, but how many inferences can we process per second? A separate upcoming tutorial will document performance tuning to maximize throughput. In the meantime, we can still perform a simple naïve stress test. The code below will spawn 64 worker threads, with each thread repeatedly sending a full batch of data to process. A separate thread will periodically print throughput and latency measurements.
```
import os
import argparse
import time
import numpy as np
import requests
import sys
from concurrent import futures

import torch


parser = argparse.ArgumentParser()
parser.add_argument('--url', help='Torchserve model URL', type=str, default='http://127.0.0.1:8080/predictions/bert-max_length128-batch_size6')
parser.add_argument('--num_thread', type=int, default=64, help='Number of threads invoking the model URL')
parser.add_argument('--batch_size', type=int, default=6)
parser.add_argument('--sequence_length', type=int, default=128)
parser.add_argument('--latency_window_size', type=int, default=1000)
parser.add_argument('--throughput_time', type=int, default=300)
parser.add_argument('--throughput_interval', type=int, default=10)
args = parser.parse_args()

data = { 'seq_0': 'A completely made up sentence.',
         'seq_1': 'Well, I suppose they are all made up.' }
live = True
num_infer = 0
latency_list = []


def one_thread(pred, feed_data):
    # repeatedly send a full batch of data and record per-request latency
    global latency_list
    global num_infer
    global live
    session = requests.Session()
    while True:
        start = time.time()
        result = session.post(pred, data=feed_data)
        latency = time.time() - start
        latency_list.append(latency)
        num_infer += 1
        if not live:
            break


def current_performance():
    # periodically print throughput and latency percentiles
    last_num_infer = num_infer
    for _ in range(args.throughput_time // args.throughput_interval):
        current_num_infer = num_infer
        throughput = (current_num_infer - last_num_infer) / args.throughput_interval
        p50 = 0.0
        p90 = 0.0
        if latency_list:
            p50 = np.percentile(latency_list[-args.latency_window_size:], 50)
            p90 = np.percentile(latency_list[-args.latency_window_size:], 90)
        print('pid {}: current throughput {}, latency p50={:.3f} p90={:.3f}'.format(os.getpid(), throughput, p50, p90))
        sys.stdout.flush()
        last_num_infer = current_num_infer
        time.sleep(args.throughput_interval)
    global live
    live = False


with futures.ThreadPoolExecutor(max_workers=args.num_thread+1) as executor:
    executor.submit(current_performance)
    for _ in range(args.num_thread):
        executor.submit(one_thread, args.url, data)
```
Run the benchmarking script.
```
pid 28523: current throughput 0.0, latency p50=0.000 p90=0.000
pid 28523: current throughput 617.7, latency p50=0.092 p90=0.156
pid 28523: current throughput 697.3, latency p50=0.082 p90=0.154
pid 28523: current throughput 702.8, latency p50=0.081 p90=0.149
pid 28523: current throughput 699.1, latency p50=0.085 p90=0.147
pid 28523: current throughput 703.8, latency p50=0.083 p90=0.148
pid 28523: current throughput 699.3, latency p50=0.083 p90=0.148
...
```
**Congratulations!** By now you should have successfully served a batched model over TorchServe.
You can now shut down TorchServe (for example, with `torchserve --stop`).
_This document is relevant for_: `Inf1`
<a class="reference internal" href="../../torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fframeworks/torch/torch-neuron/tutorials/tutorial-torchserve.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/frameworks/torch/torch-neuron/tutorials/tutorial-torchserve.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../../_sources/frameworks/torch/torch-neuron/tutorials/tutorial-torchserve.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.rst</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#overview">
Overview
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#run-the-tutorial">
Run the tutorial
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#setup-torchserve">
Setup TorchServe
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#run-torchserve">
Run TorchServe
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#benchmark-torchserve">
Benchmark TorchServe
</a>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>BERT TorchServe Tutorial</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
<div>
<h2> Contents </h2>
</div>
<nav aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#overview">
Overview
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#run-the-tutorial">
Run the tutorial
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#setup-torchserve">
Setup TorchServe
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#run-torchserve">
Run TorchServe
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#benchmark-torchserve">
Benchmark TorchServe
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
*This document is relevant for*: ``Inf1``

.. _pytorch-tutorials-torchserve:

BERT TorchServe Tutorial
========================

.. contents:: Table of Contents
   :local:
   :depth: 1
<div class="section" id="overview">
<h2><a class="toc-backref" href="#id6">Overview</a><a class="headerlink" href="#overview" title="Permalink to this headline">#</a></h2>
<p>This tutorial demonstrates the use of <a class="reference external" href="https://pytorch.org/serve">TorchServe</a> with Neuron, the SDK for Amazon Inf1 instances. By the end of this tutorial, you will understand how TorchServe can be used to serve a model backed by EC2 Inf1 instances. We will use a pretrained BERT-Base model to determine if one sentence is a paraphrase of another.</p>
</div>
<div class="section" id="run-the-tutorial">
<span id="torchserve-compile"></span><h2><a class="toc-backref" href="#id7">Run the tutorial</a><a class="headerlink" href="#run-the-tutorial" title="Permalink to this headline">#</a></h2>
<p>Open a terminal, log into your remote instance, and activate a Pytorch virtual environment setup (see the <a class="reference internal" href="../setup/pytorch-install.html#install-neuron-pytorch"><span class="std std-ref">Pytorch Installation Guide</span></a>). To complete this tutorial, you will need a compiled BERT model. If you have already completed the HuggingFace Pretrained BERT tutorial <a class="reference internal" href="../../../../src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.html"><span class="std std-ref">[html]</span></a> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/bert_tutorial/tutorial_pretrained_bert.ipynb">[notebook]</a> then you already have the necessary file. Otherwise, you can setup your environment as shown below and then run <a class="reference download internal" download="" href="../../../../_downloads/4640004148fe54855750b60c95066e8c/trace_bert_neuron.py"><code class="xref download docutils literal notranslate"><span class="pre">trace_bert_neuron.py</span></code></a> to obtain a traced BERT model.</p>
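If you want a sense of what the tracing script does, the following is a minimal sketch only, not the official ``trace_bert_neuron.py``; the model name, batch size, and sequence length are assumptions taken from this tutorial's ``config.json``.

.. code-block:: python

   # Hedged sketch of tracing BERT for Inf1; run on an instance with torch-neuron.
   import torch
   import torch_neuron  # registers the torch.neuron.trace API
   from transformers import AutoTokenizer, AutoModelForSequenceClassification

   # Settings assumed to match this tutorial's config.json
   model_name = 'bert-base-cased-finetuned-mrpc'
   batch_size, max_length = 6, 128

   tokenizer = AutoTokenizer.from_pretrained(model_name)
   model = AutoModelForSequenceClassification.from_pretrained(model_name, torchscript=True)
   model.eval()

   # Build an example batch with the exact shapes the server will use
   inputs = tokenizer(['a'] * batch_size, ['b'] * batch_size,
                      max_length=max_length, padding='max_length',
                      truncation=True, return_tensors='pt')
   example = (inputs['input_ids'], inputs['attention_mask'], inputs['token_type_ids'])

   # Compile for Inferentia and save the TorchScript artifact
   model_neuron = torch.neuron.trace(model, example)
   model_neuron.save('bert_neuron_b6.pt')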
You should now have a compiled ``bert_neuron_b6.pt`` file, which is required going forward.

Open a shell on the instance you prepared earlier and create a new directory named ``torchserve``. Copy your compiled model from the previous tutorial into this new directory.

.. code-block:: bash

   cd torchserve
   ls

.. code-block:: text

   bert_neuron_b6.pt

Prepare a new Python virtual environment with the necessary Neuron and TorchServe components. Using a virtual environment keeps (most of) the various tutorial components isolated from the rest of the system in a controlled way.

.. code-block:: bash

   pip install transformers==4.20.1 torchserve==0.7.0 torch-model-archiver==0.7.0 captum==0.6.0
Install the system requirements for TorchServe.

.. tab-set::

   .. tab-item:: Amazon Linux 2 DLAMI Base

      .. code-block:: bash

         sudo yum install jq java-11-amazon-corretto-headless
         sudo alternatives --config java
         sudo alternatives --config javac

   .. tab-item:: Ubuntu 20 DLAMI Base

      .. code-block:: bash

         sudo apt install openjdk-11-jdk

.. code-block:: bash

   java -version

.. code-block:: text

   openjdk version "11.0.17" 2022-10-18
   OpenJDK Runtime Environment (build 11.0.17+8-post-Ubuntu-1ubuntu218.04)
   OpenJDK 64-Bit Server VM (build 11.0.17+8-post-Ubuntu-1ubuntu218.04, mixed mode, sharing)

.. code-block:: bash

   javac -version

.. code-block:: text

   javac 11.0.17

Verify that TorchServe is now available.

.. code-block:: bash

   torchserve --version

.. code-block:: text

   TorchServe Version is 0.7.0
<div class="section" id="setup-torchserve">
<span id="torchserve-setup"></span><h2><a class="toc-backref" href="#id8">Setup TorchServe</a><a class="headerlink" href="#setup-torchserve" title="Permalink to this headline">#</a></h2>
<p>During this tutorial you will need to download a few files onto your instance. The simplest way to accomplish this is to paste the download links provided above each file into a <code class="docutils literal notranslate"><span class="pre">wget</span></code> command. (We don’t provide the links directly because they are subject to change.) For example, right-click and copy the download link for <code class="docutils literal notranslate"><span class="pre">config.json</span></code> shown below.</p>
<div class="literal-block-wrapper docutils container" id="id1">
<div class="code-block-caption"><span class="caption-text"><a class="reference download internal" download="" href="../../../../_downloads/cbeed28d83ac18dfc474cca0e8fd46bd/config.json"><code class="xref download docutils literal notranslate"><span class="pre">config.json</span></code></a></span><a class="headerlink" href="#id1" title="Permalink to this code">#</a></div>
<div class="highlight-JSON notranslate"><div class="highlight"><pre><span></span><span class="p">{</span>
<span class="w"> </span><span class="nt">"model_name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"bert-base-cased-finetuned-mrpc"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"max_length"</span><span class="p">:</span><span class="w"> </span><span class="mi">128</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"batch_size"</span><span class="p">:</span><span class="w"> </span><span class="mi">6</span>
<span class="p">}</span>
</pre></div>
</div>
</div>
<p>Now execute the following in your shell:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>wget<span class="w"> </span><paste<span class="w"> </span>link<span class="w"> </span>here>
ls
</pre></div>
</div>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">bert_neuron_b6</span><span class="o">.</span><span class="n">pt</span> <span class="n">config</span><span class="o">.</span><span class="n">json</span>
</pre></div>
</div>
<p>Download the <a class="reference external" href="https://pytorch.org/serve/custom_service.html">custom handler script</a> that will eventually respond to inference requests.</p>
<div class="literal-block-wrapper docutils container" id="id2">
<div class="code-block-caption"><span class="caption-text"><a class="reference download internal" download="" href="../../../../_downloads/9c91f3c0268c772a942b742486b8c90d/handler_bert.py"><code class="xref download docutils literal notranslate"><span class="pre">handler_bert.py</span></code></a></span><a class="headerlink" href="#id2" title="Permalink to this code">#</a></div>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="linenos"> 1</span><span class="kn">import</span> <span class="nn">os</span>
<span class="linenos"> 2</span><span class="kn">import</span> <span class="nn">json</span>
<span class="linenos"> 3</span><span class="kn">import</span> <span class="nn">sys</span>
<span class="linenos"> 4</span><span class="kn">import</span> <span class="nn">logging</span>
<span class="linenos"> 5</span><span class="kn">from</span> <span class="nn">abc</span> <span class="kn">import</span> <span class="n">ABC</span>
<span class="linenos"> 6</span>
<span class="linenos"> 7</span><span class="kn">import</span> <span class="nn">torch</span>
<span class="linenos"> 8</span><span class="kn">import</span> <span class="nn">torch_neuron</span>
<span class="linenos"> 9</span>
<span class="linenos"> 10</span><span class="kn">from</span> <span class="nn">transformers</span> <span class="kn">import</span> <span class="n">AutoTokenizer</span>
<span class="linenos"> 11</span><span class="kn">from</span> <span class="nn">ts.torch_handler.base_handler</span> <span class="kn">import</span> <span class="n">BaseHandler</span>
<span class="linenos"> 12</span>
<span class="linenos"> 13</span>
<span class="linenos"> 14</span><span class="c1"># one core per worker</span>
<span class="linenos"> 15</span><span class="n">os</span><span class="o">.</span><span class="n">environ</span><span class="p">[</span><span class="s1">'NEURON_RT_NUM_CORES'</span><span class="p">]</span> <span class="o">=</span> <span class="s1">'1'</span>
<span class="linenos"> 16</span>
<span class="linenos"> 17</span><span class="n">logger</span> <span class="o">=</span> <span class="n">logging</span><span class="o">.</span><span class="n">getLogger</span><span class="p">(</span><span class="vm">__name__</span><span class="p">)</span>
<span class="linenos"> 18</span>
<span class="linenos"> 19</span><span class="k">class</span> <span class="nc">BertEmbeddingHandler</span><span class="p">(</span><span class="n">BaseHandler</span><span class="p">,</span> <span class="n">ABC</span><span class="p">):</span>
<span class="linenos"> 20</span><span class="w"> </span><span class="sd">"""</span>
<span class="linenos"> 21</span><span class="sd"> Handler class for Bert Embedding computations.</span>
<span class="linenos"> 22</span><span class="sd"> """</span>
<span class="linenos"> 23</span> <span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="linenos"> 24</span> <span class="nb">super</span><span class="p">(</span><span class="n">BertEmbeddingHandler</span><span class="p">,</span> <span class="bp">self</span><span class="p">)</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="linenos"> 25</span> <span class="bp">self</span><span class="o">.</span><span class="n">initialized</span> <span class="o">=</span> <span class="kc">False</span>
<span class="linenos"> 26</span>
<span class="linenos"> 27</span> <span class="k">def</span> <span class="nf">initialize</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">ctx</span><span class="p">):</span>
<span class="linenos"> 28</span> <span class="bp">self</span><span class="o">.</span><span class="n">manifest</span> <span class="o">=</span> <span class="n">ctx</span><span class="o">.</span><span class="n">manifest</span>
<span class="linenos"> 29</span> <span class="n">properties</span> <span class="o">=</span> <span class="n">ctx</span><span class="o">.</span><span class="n">system_properties</span>
<span class="linenos"> 30</span> <span class="bp">self</span><span class="o">.</span><span class="n">device</span> <span class="o">=</span> <span class="s1">'cpu'</span>
<span class="linenos"> 31</span> <span class="n">model_dir</span> <span class="o">=</span> <span class="n">properties</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s1">'model_dir'</span><span class="p">)</span>
<span class="linenos"> 32</span> <span class="n">serialized_file</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">manifest</span><span class="p">[</span><span class="s1">'model'</span><span class="p">][</span><span class="s1">'serializedFile'</span><span class="p">]</span>
<span class="linenos"> 33</span> <span class="n">model_pt_path</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">model_dir</span><span class="p">,</span> <span class="n">serialized_file</span><span class="p">)</span>
<span class="linenos"> 34</span>
<span class="linenos"> 35</span> <span class="c1"># point sys.path to our config file</span>
<span class="linenos"> 36</span> <span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s1">'config.json'</span><span class="p">)</span> <span class="k">as</span> <span class="n">fp</span><span class="p">:</span>
<span class="linenos"> 37</span> <span class="n">config</span> <span class="o">=</span> <span class="n">json</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">fp</span><span class="p">)</span>
<span class="linenos"> 38</span> <span class="bp">self</span><span class="o">.</span><span class="n">max_length</span> <span class="o">=</span> <span class="n">config</span><span class="p">[</span><span class="s1">'max_length'</span><span class="p">]</span>
<span class="linenos"> 39</span> <span class="bp">self</span><span class="o">.</span><span class="n">batch_size</span> <span class="o">=</span> <span class="n">config</span><span class="p">[</span><span class="s1">'batch_size'</span><span class="p">]</span>
<span class="linenos"> 40</span> <span class="bp">self</span><span class="o">.</span><span class="n">classes</span> <span class="o">=</span> <span class="p">[</span><span class="s1">'not paraphrase'</span><span class="p">,</span> <span class="s1">'paraphrase'</span><span class="p">]</span>
<span class="linenos"> 41</span>
<span class="linenos"> 42</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">jit</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">model_pt_path</span><span class="p">)</span>
<span class="linenos"> 43</span> <span class="n">logger</span><span class="o">.</span><span class="n">debug</span><span class="p">(</span><span class="sa">f</span><span class="s1">'Model loaded from </span><span class="si">{</span><span class="n">model_dir</span><span class="si">}</span><span class="s1">'</span><span class="p">)</span>
<span class="linenos"> 44</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">device</span><span class="p">)</span>
<span class="linenos"> 45</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">eval</span><span class="p">()</span>
<span class="linenos"> 46</span>
<span class="linenos"> 47</span> <span class="bp">self</span><span class="o">.</span><span class="n">tokenizer</span> <span class="o">=</span> <span class="n">AutoTokenizer</span><span class="o">.</span><span class="n">from_pretrained</span><span class="p">(</span><span class="n">config</span><span class="p">[</span><span class="s1">'model_name'</span><span class="p">])</span>
<span class="linenos"> 48</span> <span class="bp">self</span><span class="o">.</span><span class="n">initialized</span> <span class="o">=</span> <span class="kc">True</span>
<span class="linenos"> 49</span>
<span class="linenos"> 50</span> <span class="k">def</span> <span class="nf">preprocess</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">input_data</span><span class="p">):</span>
<span class="linenos"> 51</span><span class="w"> </span><span class="sd">"""</span>
<span class="linenos"> 52</span><span class="sd"> Tokenization pre-processing</span>
<span class="linenos"> 53</span><span class="sd"> """</span>
<span class="linenos"> 54</span>
<span class="linenos"> 55</span> <span class="n">input_ids</span> <span class="o">=</span> <span class="p">[]</span>
<span class="linenos"> 56</span> <span class="n">attention_masks</span> <span class="o">=</span> <span class="p">[]</span>
<span class="linenos"> 57</span> <span class="n">token_type_ids</span> <span class="o">=</span> <span class="p">[]</span>
<span class="linenos"> 58</span> <span class="k">for</span> <span class="n">row</span> <span class="ow">in</span> <span class="n">input_data</span><span class="p">:</span>
<span class="linenos"> 59</span> <span class="n">seq_0</span> <span class="o">=</span> <span class="n">row</span><span class="p">[</span><span class="s1">'seq_0'</span><span class="p">]</span><span class="o">.</span><span class="n">decode</span><span class="p">(</span><span class="s1">'utf-8'</span><span class="p">)</span>
<span class="linenos"> 60</span> <span class="n">seq_1</span> <span class="o">=</span> <span class="n">row</span><span class="p">[</span><span class="s1">'seq_1'</span><span class="p">]</span><span class="o">.</span><span class="n">decode</span><span class="p">(</span><span class="s1">'utf-8'</span><span class="p">)</span>
<span class="linenos"> 61</span> <span class="n">logger</span><span class="o">.</span><span class="n">debug</span><span class="p">(</span><span class="sa">f</span><span class="s1">'Received text: "</span><span class="si">{</span><span class="n">seq_0</span><span class="si">}</span><span class="s1">", "</span><span class="si">{</span><span class="n">seq_1</span><span class="si">}</span><span class="s1">"'</span><span class="p">)</span>
<span class="linenos"> 62</span>
<span class="linenos"> 63</span> <span class="n">inputs</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">tokenizer</span><span class="o">.</span><span class="n">encode_plus</span><span class="p">(</span>
<span class="linenos"> 64</span> <span class="n">seq_0</span><span class="p">,</span>
<span class="linenos"> 65</span> <span class="n">seq_1</span><span class="p">,</span>
<span class="linenos"> 66</span> <span class="n">max_length</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">max_length</span><span class="p">,</span>
<span class="linenos"> 67</span> <span class="n">padding</span><span class="o">=</span><span class="s1">'max_length'</span><span class="p">,</span>
<span class="linenos"> 68</span> <span class="n">truncation</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span>
<span class="linenos"> 69</span> <span class="n">return_tensors</span><span class="o">=</span><span class="s1">'pt'</span>
<span class="linenos"> 70</span> <span class="p">)</span>
<span class="linenos"> 71</span>
<span class="linenos"> 72</span> <span class="n">input_ids</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">inputs</span><span class="p">[</span><span class="s1">'input_ids'</span><span class="p">])</span>
<span class="linenos"> 73</span> <span class="n">attention_masks</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">inputs</span><span class="p">[</span><span class="s1">'attention_mask'</span><span class="p">])</span>
<span class="linenos"> 74</span> <span class="n">token_type_ids</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">inputs</span><span class="p">[</span><span class="s1">'token_type_ids'</span><span class="p">])</span>
<span class="linenos"> 75</span>
<span class="linenos"> 76</span> <span class="n">batch</span> <span class="o">=</span> <span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">cat</span><span class="p">(</span><span class="n">input_ids</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span>
<span class="linenos"> 77</span> <span class="n">torch</span><span class="o">.</span><span class="n">cat</span><span class="p">(</span><span class="n">attention_masks</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span>
<span class="linenos"> 78</span> <span class="n">torch</span><span class="o">.</span><span class="n">cat</span><span class="p">(</span><span class="n">token_type_ids</span><span class="p">,</span> <span class="mi">0</span><span class="p">))</span>
<span class="linenos"> 79</span>
<span class="linenos"> 80</span> <span class="k">return</span> <span class="n">batch</span>
<span class="linenos"> 81</span>
<span class="linenos"> 82</span> <span class="k">def</span> <span class="nf">inference</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">inputs</span><span class="p">):</span>
<span class="linenos"> 83</span><span class="w"> </span><span class="sd">"""</span>
<span class="linenos"> 84</span><span class="sd"> Predict the class of a text using a trained transformer model.</span>
<span class="linenos"> 85</span><span class="sd"> """</span>
<span class="linenos"> 86</span>
<span class="linenos"> 87</span> <span class="c1"># sanity check dimensions</span>
<span class="linenos"> 88</span> <span class="k">assert</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">inputs</span><span class="p">)</span> <span class="o">==</span> <span class="mi">3</span><span class="p">)</span>
<span class="linenos"> 89</span> <span class="n">num_inferences</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">inputs</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span>
<span class="linenos"> 90</span> <span class="k">assert</span><span class="p">(</span><span class="n">num_inferences</span> <span class="o"><=</span> <span class="bp">self</span><span class="o">.</span><span class="n">batch_size</span><span class="p">)</span>
<span class="linenos"> 91</span>
<span class="linenos"> 92</span> <span class="c1"># insert padding if we received a partial batch</span>
<span class="linenos"> 93</span> <span class="n">padding</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">batch_size</span> <span class="o">-</span> <span class="n">num_inferences</span>
<span class="linenos"> 94</span> <span class="k">if</span> <span class="n">padding</span> <span class="o">></span> <span class="mi">0</span><span class="p">:</span>
<span class="linenos"> 95</span> <span class="n">pad</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">ConstantPad1d</span><span class="p">((</span><span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="n">padding</span><span class="p">),</span> <span class="n">value</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span>
<span class="linenos"> 96</span> <span class="n">inputs</span> <span class="o">=</span> <span class="p">[</span><span class="n">pad</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">inputs</span><span class="p">]</span>
<span class="linenos"> 97</span>
<span class="linenos"> 98</span> <span class="n">outputs</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="p">(</span><span class="o">*</span><span class="n">inputs</span><span class="p">)[</span><span class="mi">0</span><span class="p">]</span>
<span class="linenos"> 99</span> <span class="n">predictions</span> <span class="o">=</span> <span class="p">[]</span>
<span class="linenos">100</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_inferences</span><span class="p">):</span>
<span class="linenos">101</span> <span class="n">prediction</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">classes</span><span class="p">[</span><span class="n">outputs</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="o">.</span><span class="n">argmax</span><span class="p">()</span><span class="o">.</span><span class="n">item</span><span class="p">()]</span>
<span class="linenos">102</span> <span class="n">predictions</span><span class="o">.</span><span class="n">append</span><span class="p">([</span><span class="n">prediction</span><span class="p">])</span>
<span class="linenos">103</span> <span class="n">logger</span><span class="o">.</span><span class="n">debug</span><span class="p">(</span><span class="s2">"Model predicted: '</span><span class="si">%s</span><span class="s2">'"</span><span class="p">,</span> <span class="n">prediction</span><span class="p">)</span>
<span class="linenos">104</span> <span class="k">return</span> <span class="n">predictions</span>
<span class="linenos">105</span>
<span class="linenos">106</span> <span class="k">def</span> <span class="nf">postprocess</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">inference_output</span><span class="p">):</span>
<span class="linenos">107</span> <span class="k">return</span> <span class="n">inference_output</span>
</pre></div>
</div>
</div>
Next, we need to associate the handler script with the compiled model using ``torch-model-archiver``. Run the following commands in your terminal:

.. code-block:: bash

   mkdir model_store
   MAX_LENGTH=$(jq '.max_length' config.json)
   BATCH_SIZE=$(jq '.batch_size' config.json)
   MODEL_NAME=bert-max_length$MAX_LENGTH-batch_size$BATCH_SIZE
   torch-model-archiver --model-name "$MODEL_NAME" --version 1.0 --serialized-file ./bert_neuron_b6.pt --handler "./handler_bert.py" --extra-files "./config.json" --export-path model_store

.. note::

   If you modify your model or a dependency, you will need to rerun the archiver command with the ``-f`` flag appended to update the archive.
The result of the above will be a ``.mar`` file inside the ``model_store`` directory.

.. code-block:: bash

   ls model_store

.. code-block:: text

   bert-max_length128-batch_size6.mar

This file is essentially an archive associated with a fixed version of your model along with its dependencies (e.g. the handler code).
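You can see this for yourself: as far as we can tell, a ``.mar`` file is an ordinary zip archive, so a quick sketch like the following lists what was packaged (the archive name matches the one produced above).

.. code-block:: python

   # Hedged sketch: peek inside the .mar archive (assumed to be zip-format).
   import zipfile

   mar = 'model_store/bert-max_length128-batch_size6.mar'
   print(zipfile.ZipFile(mar).namelist())
   # Expect entries such as bert_neuron_b6.pt, handler_bert.py, config.json,
   # plus a MAR-INF/MANIFEST.json describing the model.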
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>The version specified in the <code class="docutils literal notranslate"><span class="pre">torch-model-archiver</span></code> command can be appended to REST API requests to access a specific version of your model. For example, if your model was hosted locally on port 8080 and named “bert”, the latest version of your model would be available at <code class="docutils literal notranslate"><span class="pre">http://localhost:8080/predictions/bert</span></code>, while version 1.0 would be accessible at <code class="docutils literal notranslate"><span class="pre">http://localhost:8080/predictions/bert/1.0</span></code>. We will see how to perform inference using this API in Step 6.</p>
</div>
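As a small illustration of the note above, the snippet below hits both the latest and the pinned-version endpoints with Python's ``requests``. The model name ``bert`` and the request fields are the hypothetical values from the note and from this tutorial's handler, not something registered yet.

.. code-block:: python

   # Illustrative only: 'bert' is the hypothetical model name from the note above.
   import requests

   data = {'seq_0': 'The cat sat on the mat.',
           'seq_1': 'A cat was sitting on the mat.'}

   latest = requests.post('http://localhost:8080/predictions/bert', data=data)
   pinned = requests.post('http://localhost:8080/predictions/bert/1.0', data=data)
   print(latest.json(), pinned.json())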
<p>Create a <a class="reference external" href="https://pytorch.org/serve/configuration.html">custom config</a> file to set some parameters. This file will be used to configure the server at launch when we run <code class="docutils literal notranslate"><span class="pre">torchserve</span> <span class="pre">--start</span></code>.</p>
<div class="literal-block-wrapper docutils container" id="id3">
<div class="code-block-caption"><span class="caption-text"><a class="reference download internal" download="" href="../../../../_downloads/47f3a2a02c39985f3510bf499227b636/torchserve.config"><code class="xref download docutils literal notranslate"><span class="pre">torchserve.config</span></code></a></span><a class="headerlink" href="#id3" title="Permalink to this code">#</a></div>
<div class="highlight-properties notranslate"><div class="highlight"><pre><span></span><span class="c1"># bind inference API to all network interfaces with SSL enabled</span>
<span class="na">inference_address</span><span class="o">=</span><span class="s">http://0.0.0.0:8080</span>
<span class="na">default_workers_per_model</span><span class="o">=</span><span class="s">1</span>
</pre></div>
</div>
</div>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>This will cause TorchServe to bind on all interfaces. For security in real-world applications, you’ll probably want to use port 8443 and <a class="reference external" href="https://pytorch.org/serve/configuration.html#enable-ssl">enable SSL</a>.</p>
</div>
</div>
<div class="section" id="run-torchserve">
<span id="torchserve-run"></span><h2><a class="toc-backref" href="#id9">Run TorchServe</a><a class="headerlink" href="#run-torchserve" title="Permalink to this headline">#</a></h2>
<p>It’s time to start the server. Typically we’d want to launch this in a separate console, but for this demo we’ll just redirect output to a file.</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>torchserve<span class="w"> </span>--start<span class="w"> </span>--ncs<span class="w"> </span>--model-store<span class="w"> </span>model_store<span class="w"> </span>--ts-config<span class="w"> </span>torchserve.config<span class="w"> </span><span class="m">2</span>><span class="p">&</span><span class="m">1</span><span class="w"> </span>>torchserve.log
</pre></div>
</div>
<p>Verify that the server seems to have started okay.</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>curl<span class="w"> </span>http://127.0.0.1:8080/ping
</pre></div>
</div>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="p">{</span>
<span class="s2">"status"</span><span class="p">:</span> <span class="s2">"Healthy"</span>
<span class="p">}</span>
</pre></div>
</div>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>If you get an error when trying to ping the server, you may have tried before the server was fully launched. Check <code class="docutils literal notranslate"><span class="pre">torchserve.log</span></code> for details.</p>
</div>
<p>Use the Management API to instruct TorchServe to load our model.</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>$<span class="w"> </span><span class="nv">MAX_BATCH_DELAY</span><span class="o">=</span><span class="m">5000</span><span class="w"> </span><span class="c1"># ms timeout before a partial batch is processed</span>
$<span class="w"> </span><span class="nv">INITIAL_WORKERS</span><span class="o">=</span><span class="m">4</span><span class="w"> </span><span class="c1"># number of models that will be loaded at launch</span>
$<span class="w"> </span>curl<span class="w"> </span>-X<span class="w"> </span>POST<span class="w"> </span><span class="s2">"http://localhost:8081/models?url=</span><span class="nv">$MODEL_NAME</span><span class="s2">.mar&batch_size=</span><span class="nv">$BATCH_SIZE</span><span class="s2">&initial_workers=</span><span class="nv">$INITIAL_WORKERS</span><span class="s2">&max_batch_delay=</span><span class="nv">$MAX_BATCH_DELAY</span><span class="s2">"</span>
</pre></div>
</div>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="p">{</span>
<span class="s2">"status"</span><span class="p">:</span> <span class="s2">"Model </span><span class="se">\"</span><span class="s2">bert-max_length128-batch_size6</span><span class="se">\"</span><span class="s2"> Version: 1.0 registered with 4 initial workers"</span>
<span class="p">}</span>
</pre></div>
</div>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Any additional attempts to configure the model after the initial curl request will cause the server to return a 409 error. You’ll need to stop/start/configure the server to realize any changes.</p>
</div>
<p>The <code class="docutils literal notranslate"><span class="pre">MAX_BATCH_DELAY</span></code> is a timeout value that determines how long to wait before processing a partial batch. This is why the handler code needs to check the batch dimension and potentially add padding. TorchServe will instantiate the number of model handlers indicated by <code class="docutils literal notranslate"><span class="pre">INITIAL_WORKERS</span></code>, so this value controls how many models we will load onto Inferentia in parallel. This tutorial was performed on an inf1.xlarge instance (one Inferentia chip), so there are four NeuronCores available. If you want to control worker scaling more dynamically, <a class="reference external" href="https://pytorch.org/serve/management_api.html#scale-workers">see the docs</a>.</p>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>If you attempt to load more models than NeuronCores available, one of two things will occur. Either the extra models will fit in device memory but performance will suffer, or you will encounter an error on your initial inference. You shouldn’t set <code class="docutils literal notranslate"><span class="pre">INITIAL_WORKERS</span></code> above the number of NeuronCores. However, you may want to use fewer cores if you are using the <a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-pipeline.html#neuroncore-pipeline"><span class="std std-ref">NeuronCore Pipeline</span></a> feature.</p>
</div>
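For reference, worker scaling after registration goes through the same Management API. The sketch below uses the ``PUT /models/{model_name}`` scale-workers call via Python's ``requests``; the parameter names are taken from the TorchServe management docs linked above, so confirm them against the version you are running.

.. code-block:: python

   # Sketch: scale the registered model to between 2 and 4 workers.
   import requests

   name = 'bert-max_length128-batch_size6'
   resp = requests.put(f'http://localhost:8081/models/{name}',
                       params={'min_worker': 2, 'max_worker': 4})
   print(resp.status_code, resp.json())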
It looks like everything is running successfully at this point, so it's time for an inference.

Create the ``infer_bert.py`` file below on your instance.

.. code-block:: python
   :caption: infer_bert.py

   import json
   import concurrent.futures
   import requests

   with open('config.json') as fp:
       config = json.load(fp)
   max_length = config['max_length']
   batch_size = config['batch_size']
   name = f'bert-max_length{max_length}-batch_size{batch_size}'

   # dispatch requests in parallel
   url = f'http://localhost:8080/predictions/{name}'
   paraphrase = {'seq_0': "HuggingFace's headquarters are situated in Manhattan",
                 'seq_1': "The company HuggingFace is based in New York City"}
   not_paraphrase = {'seq_0': paraphrase['seq_0'], 'seq_1': 'This is total nonsense.'}

   with concurrent.futures.ThreadPoolExecutor(max_workers=batch_size) as executor:
       def worker_thread(worker_index):
           # we'll send half the requests as not_paraphrase examples for sanity
           data = paraphrase if worker_index < batch_size//2 else not_paraphrase
           response = requests.post(url, data=data)
           print(worker_index, response.json())

       for worker_index in range(batch_size):
           executor.submit(worker_thread, worker_index)
This script sends a `batch_size` number of requests to our model. In this example, we are using a model that estimates the probability that one sentence is a paraphrase of another. The script sends positive examples in the first half of the batch and negative examples in the second half.

Execute the script in your terminal:

```
$ python infer_bert.py
```

```
1 ['paraphrase']
3 ['not paraphrase']
4 ['not paraphrase']
0 ['paraphrase']
5 ['not paraphrase']
2 ['paraphrase']
```

We can see that the first three threads (0, 1, 2) all report `paraphrase` and the last three (3, 4, 5) all report `not paraphrase`, as expected. If we instead modify the script to send an incomplete batch and then wait for the batch timeout to expire, the results computed for the excess padding are discarded.
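As a quick sketch of that incomplete-batch case (reusing `url`, `paraphrase`, and the imports from the script above; the thread count of 2 is arbitrary), only two of the six batch slots are filled, so TorchServe waits for its configured batch delay, runs the padded batch once, and returns just the two real results:

```
# Hypothetical incomplete-batch sketch -- reuses `url` and `paraphrase`
# from infer_bert.py above. Only 2 of the 6 batch slots are filled; the
# padded slots are discarded server-side after the batch delay expires.
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
    for worker_index in range(2):
        executor.submit(
            lambda i: print(i, requests.post(url, data=paraphrase).json()),
            worker_index,
        )
```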
## Benchmark TorchServe

We've seen how to perform a single batched inference, but how many inferences can we process per second? A separate upcoming tutorial will document performance tuning to maximize throughput. In the meantime, we can still perform a simple naïve stress test. The code below will spawn 64 worker threads, with each thread repeatedly sending a full batch of data to process. A separate thread will periodically print throughput and latency measurements.

`benchmark_bert.py`:
```
import os
import argparse
import time
import numpy as np
import requests
import sys
from concurrent import futures

import torch


parser = argparse.ArgumentParser()
parser.add_argument('--url', help='Torchserve model URL', type=str, default=f'http://127.0.0.1:8080/predictions/bert-max_length128-batch_size6')
parser.add_argument('--num_thread', type=int, default=64, help='Number of threads invoking the model URL')
parser.add_argument('--batch_size', type=int, default=6)
parser.add_argument('--sequence_length', type=int, default=128)
parser.add_argument('--latency_window_size', type=int, default=1000)
parser.add_argument('--throughput_time', type=int, default=300)
parser.add_argument('--throughput_interval', type=int, default=10)
args = parser.parse_args()

data = { 'seq_0': 'A completely made up sentence.',
         'seq_1': 'Well, I suppose they are all made up.' }
live = True
num_infer = 0
latency_list = []


def one_thread(pred, feed_data):
    global latency_list
    global num_infer
    global live
    session = requests.Session()
    while True:
        start = time.time()
        result = session.post(pred, data=feed_data)
        latency = time.time() - start
        latency_list.append(latency)
        num_infer += 1
        if not live:
            break


def current_performance():
    last_num_infer = num_infer
    for _ in range(args.throughput_time // args.throughput_interval):
        current_num_infer = num_infer
        throughput = (current_num_infer - last_num_infer) / args.throughput_interval
        p50 = 0.0
        p90 = 0.0
        if latency_list:
            p50 = np.percentile(latency_list[-args.latency_window_size:], 50)
            p90 = np.percentile(latency_list[-args.latency_window_size:], 90)
        print('pid {}: current throughput {}, latency p50={:.3f} p90={:.3f}'.format(os.getpid(), throughput, p50, p90))
        sys.stdout.flush()
        last_num_infer = current_num_infer
        time.sleep(args.throughput_interval)
    global live
    live = False


with futures.ThreadPoolExecutor(max_workers=args.num_thread+1) as executor:
    executor.submit(current_performance)
    for _ in range(args.num_thread):
        executor.submit(one_thread, args.url, data)
```
Run the benchmarking script:

```
python benchmark_bert.py
```

```
pid 28523: current throughput 0.0, latency p50=0.000 p90=0.000
pid 28523: current throughput 617.7, latency p50=0.092 p90=0.156
pid 28523: current throughput 697.3, latency p50=0.082 p90=0.154
pid 28523: current throughput 702.8, latency p50=0.081 p90=0.149
pid 28523: current throughput 699.1, latency p50=0.085 p90=0.147
pid 28523: current throughput 703.8, latency p50=0.083 p90=0.148
pid 28523: current throughput 699.3, latency p50=0.083 p90=0.148
...
```

**Congratulations!** By now you should have successfully served a batched model over TorchServe.

You can now shut down TorchServe:

```
torchserve --stop
```

*This document is relevant for*: `Inf1`
---

# Deploy a pretrained PyTorch BERT model from HuggingFace on Amazon SageMaker with Neuron container

## Overview
In this tutorial we will deploy a pretrained BERT Base model from HuggingFace Transformers on SageMaker, using the [AWS Deep Learning Containers](https://github.com/aws/deep-learning-containers). We will use the same model as shown in the [Neuron Tutorial "PyTorch - HuggingFace Pretrained BERT Tutorial"](../../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html#). We will compile the model and build a custom AWS Deep Learning Container that includes the HuggingFace Transformers library.
This Jupyter Notebook should run on an ml.c5.4xlarge SageMaker Notebook instance. You can set up your SageMaker Notebook instance by following the [Get Started with Amazon SageMaker Notebook Instances](https://docs.aws.amazon.com/sagemaker/latest/dg/gs-console.html) documentation.
> We recommend increasing the size of the base root volume of your SageMaker notebook instance to accommodate the models and containers built locally. A root volume of 10 GB should suffice.
## Install Dependencies
This tutorial requires the following pip packages:
- torch-neuron
- neuron-cc\[tensorflow\]
- transformers
```
%env TOKENIZERS_PARALLELISM=True #Suppresses tokenizer warnings making errors easier to detect
!pip install --upgrade --no-cache-dir torch-neuron neuron-cc[tensorflow] torchvision torch --extra-index-url=https://pip.repos.neuron.amazonaws.com
!pip install --upgrade --no-cache-dir 'transformers==4.6.0'
```
## Compile the model into an AWS Neuron optimized TorchScript
```
import torch
import torch_neuron
from transformers import AutoTokenizer, AutoModelForSequenceClassification, AutoConfig
```
```
# Build tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc", return_dict=False)
# Setup some example inputs
sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"
max_length=128
paraphrase = tokenizer.encode_plus(sequence_0, sequence_2, max_length=max_length, padding='max_length', truncation=True, return_tensors="pt")
not_paraphrase = tokenizer.encode_plus(sequence_0, sequence_1, max_length=max_length, padding='max_length', truncation=True, return_tensors="pt")
# Run the original PyTorch model on the compilation example
paraphrase_classification_logits = model(**paraphrase)[0]
# Convert example inputs to a format that is compatible with TorchScript tracing
example_inputs_paraphrase = paraphrase['input_ids'], paraphrase['attention_mask'], paraphrase['token_type_ids']
example_inputs_not_paraphrase = not_paraphrase['input_ids'], not_paraphrase['attention_mask'], not_paraphrase['token_type_ids']
```
```
%%time
# Run torch.neuron.trace to generate a TorchScript that is optimized by AWS Neuron
# This step may need 3-5 min
model_neuron = torch.neuron.trace(model, example_inputs_paraphrase, verbose=1, compiler_workdir='./compilation_artifacts')
```
You may inspect **model\_neuron.graph** to see which part is running on CPU versus running on the accelerator. All native **aten** operators in the graph will be running on CPU.
```
# See which part is running on CPU versus running on the accelerator.
print(model_neuron.graph)
```
Save the compiled model, so it can be packaged and sent to S3.
```
# Save the TorchScript for later use
model_neuron.save('neuron_compiled_model.pt')
```
### Package the pre-trained model and upload it to S3
To make the model available for the SageMaker deployment, you will TAR the serialized graph and upload it to the default Amazon S3 bucket for your SageMaker session.
```
# Now you'll create a model.tar.gz file to be used by SageMaker endpoint
!tar -czvf model.tar.gz neuron_compiled_model.pt
```
```
import boto3
import time
from sagemaker.utils import name_from_base
import sagemaker
```
```
# upload model to S3
role = sagemaker.get_execution_role()
sess=sagemaker.Session()
region=sess.boto_region_name
bucket=sess.default_bucket()
sm_client=boto3.client('sagemaker')
```
```
model_key = '{}/model/model.tar.gz'.format('inf1_compiled_model')
model_path = 's3://{}/{}'.format(bucket, model_key)
boto3.resource('s3').Bucket(bucket).upload_file('model.tar.gz', model_key)
print("Uploaded model to S3:")
print(model_path)
```
## Build and Push the container
The following shell code shows how to build the container image using docker build and push the container image to ECR using docker push. The Dockerfile in this example is available in the **container** folder. Here’s an example of the Dockerfile:
```
FROM 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference-neuron:1.7.1-neuron-py36-ubuntu18.04
# Install packages
RUN pip install "transformers==4.7.0"
```
```
!cat container/Dockerfile
```
Before running the next cell, make sure your SageMaker IAM role has access to ECR. If not, you can attach the `AmazonEC2ContainerRegistryPowerUser` managed policy to your IAM role, which allows you to upload image layers to ECR.
It takes about 5 minutes to build the Docker image and upload it to ECR.
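If you need to grant that access programmatically, a minimal sketch using `boto3` (the role name below is a hypothetical placeholder; substitute the execution role attached to your notebook instance):

```
import boto3

iam = boto3.client("iam")
# "MySageMakerExecutionRole" is a hypothetical name -- use the last segment
# of your actual SageMaker execution role ARN.
iam.attach_role_policy(
    RoleName="MySageMakerExecutionRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser",
)
```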
```
%%sh
# The name of our algorithm
algorithm_name=neuron-py36-inference
cd container
account=$(aws sts get-caller-identity --query Account --output text)
# Get the region defined in the current configuration (default to us-west-2 if none defined)
region=$(aws configure get region)
region=${region:-us-west-2}
fullname="${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest"
# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${algorithm_name}" > /dev/null 2>&1
if [ $? -ne 0 ]
then
aws ecr create-repository --repository-name "${algorithm_name}" > /dev/null
fi
# Get the login command from ECR in order to pull down the SageMaker PyTorch image
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com
# Build the docker image locally with the image name and then push it to ECR
# with the full name.
docker build -t ${algorithm_name} . --build-arg REGION=${region}
docker tag ${algorithm_name} ${fullname}
# Get the login command from ECR and execute it directly
aws ecr get-login-password --region ${region} | docker login --username AWS --password-stdin ${account}.dkr.ecr.${region}.amazonaws.com
docker push ${fullname}
```
## Deploy Container and run inference based on the pretrained model
To deploy a pretrained PyTorch model, you’ll need to use the PyTorch estimator object to create a PyTorchModel object and set a different entry\_point.
You’ll use the PyTorchModel object to deploy a PyTorchPredictor. This creates a SageMaker Endpoint – a hosted prediction service that we can use to perform inference.
```
import sys
!{sys.executable} -m pip install Transformers
```
```
import os
import boto3
import sagemaker
role = sagemaker.get_execution_role()
sess = sagemaker.Session()
bucket = sess.default_bucket()
prefix = "inf1_compiled_model/model"
# Get container name in ECR
client=boto3.client('sts')
account=client.get_caller_identity()['Account']
my_session=boto3.session.Session()
region=my_session.region_name
algorithm_name="neuron-py36-inference"
ecr_image='{}.dkr.ecr.{}.amazonaws.com/{}:latest'.format(account, region, algorithm_name)
print(ecr_image)
```
An implementation of _model\_fn_ is required for the inference script. We are going to implement our own **model\_fn** and **predict\_fn** for Hugging Face BERT, and use the default implementations of **input\_fn** and **output\_fn** defined in sagemaker-pytorch-containers.
In this example, the inference script is put in the **code** folder. Run the next cell to see it:
```
!pygmentize code/inference.py
```
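The real handler ships in the **code** folder and is printed by the cell above. As a rough, hypothetical sketch of its shape (the names and details below are illustrative, not the tutorial's exact file), a `model_fn`/`predict_fn` pair for this model might look like:

```
# Illustrative sketch only -- see code/inference.py for the actual handler.
import os
import torch
import torch_neuron  # noqa: F401 -- registers Neuron ops for torch.jit.load
from transformers import AutoTokenizer

def model_fn(model_dir):
    # Load the Neuron-compiled TorchScript and a matching tokenizer.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
    model = torch.jit.load(os.path.join(model_dir, "neuron_compiled_model.pt"))
    return model, tokenizer

def predict_fn(input_data, model_and_tokenizer):
    # input_data is the JSON-decoded request: a list of two sentences.
    model, tokenizer = model_and_tokenizer
    sequence_0, sequence_1 = input_data
    encoded = tokenizer.encode_plus(sequence_0, sequence_1, max_length=128,
                                    padding='max_length', truncation=True,
                                    return_tensors='pt')
    logits = model(encoded['input_ids'], encoded['attention_mask'],
                   encoded['token_type_ids'])[0]
    label = logits.argmax(dim=1).item()
    return ['paraphrase' if label == 1 else 'not paraphrase']
```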
Path of compiled pretrained model in S3:
```
key = os.path.join(prefix, "model.tar.gz")
pretrained_model_data = "s3://{}/{}".format(bucket, key)
print(pretrained_model_data)
```
The model object is defined by using the SageMaker Python SDK's PyTorchModel, passing in the model from the estimator and the entry\_point. The endpoint's entry point for inference is defined by model\_fn, as seen in the previous code block that prints out **inference.py**. The model\_fn function will load the model and the required tokenizer.
Note that **image\_uri** must be your own ECR image.
```
from sagemaker.pytorch.model import PyTorchModel
pytorch_model = PyTorchModel(
    model_data=pretrained_model_data,
    role=role,
    source_dir="code",
    framework_version="1.7.1",
    entry_point="inference.py",
    image_uri=ecr_image
)
# Let SageMaker know that we've already compiled the model via neuron-cc
pytorch_model._is_compiled_model = True
```
The arguments to the deploy function allow us to set the number and type of instances that will be used for the Endpoint.
Here you will deploy the model to a single **ml.inf1.2xlarge** instance. It may take 6-10 min to deploy.
```
%%time
predictor = pytorch_model.deploy(initial_instance_count=1, instance_type="ml.inf1.2xlarge")
```
```
print(predictor.endpoint_name)
```
Since we declared in input\_fn that the incoming requests are JSON-encoded, we need to use a JSON serializer to encode the request data as a JSON string. Similarly, since we declared the return content type to be a JSON string, we need to use a JSON deserializer to parse the response.
```
predictor.serializer = sagemaker.serializers.JSONSerializer()
predictor.deserializer = sagemaker.deserializers.JSONDeserializer()
```
The SageMaker endpoint is now invoked with a list of sentences to get predictions.
```
%%time
result = predictor.predict(
    [
        "Never allow the same bug to bite you twice.",
        "The best part of Amazon SageMaker is that it makes machine learning easy.",
    ]
)
print(result)
```
```
%%time
result = predictor.predict(
    [
        "The company HuggingFace is based in New York City",
        "HuggingFace's headquarters are situated in Manhattan",
    ]
)
print(result)
```
## Benchmarking your endpoint
The following cells create a load test for your endpoint. You first define some helper functions: `inference_latency` runs the endpoint request and collects client-side latency and any errors, and `random_sentence` builds random sentences to be sent to the endpoint.
```
import numpy as np
import datetime
import math
import time
import boto3
import matplotlib.pyplot as plt
from joblib import Parallel, delayed
from tqdm import tqdm
import random
```
```
def inference_latency(model, *inputs):
    """
    inference_latency is a simple method to return the latency of a model inference.

        Parameters:
            model: torch model object loaded using torch.jit.load
            inputs: model() args

        Returns:
            latency in seconds
    """
    error = False
    start = time.time()
    try:
        results = model(*inputs)
    except Exception:
        error = True
        results = []
    return {'latency': time.time() - start, 'error': error, 'result': results}
```
```
def random_sentence():
    s_nouns = ["A dude", "My mom", "The king", "Some guy", "A cat with rabies", "A sloth", "Your homie", "This cool guy my gardener met yesterday", "Superman"]
    p_nouns = ["These dudes", "Both of my moms", "All the kings of the world", "Some guys", "All of a cattery's cats", "The multitude of sloths living under your bed", "Your homies", "Like, these, like, all these people", "Supermen"]
    s_verbs = ["eats", "kicks", "gives", "treats", "meets with", "creates", "hacks", "configures", "spies on", "retards", "meows on", "flees from", "tries to automate", "explodes"]
    p_verbs = ["eat", "kick", "give", "treat", "meet with", "create", "hack", "configure", "spy on", "retard", "meow on", "flee from", "try to automate", "explode"]
    infinitives = ["to make a pie.", "for no apparent reason.", "because the sky is green.", "for a disease.", "to be able to make toast explode.", "to know more about archeology."]
    # Parenthesize the object choice so the infinitive is always appended
    # (without the parentheses, `or` binds last and the infinitive is dropped).
    return (random.choice(s_nouns) + ' ' + random.choice(s_verbs) + ' '
            + (random.choice(s_nouns).lower() or random.choice(p_nouns).lower())
            + ' ' + random.choice(infinitives))

print([random_sentence(), random_sentence()])
```
The following cell creates `number_of_clients` concurrent threads to run `number_of_runs` requests. Once completed, a `boto3` CloudWatch client will query the server-side latency metrics for comparison.
```
# Defining Auxiliary variables
number_of_clients = 2
number_of_runs = 1000
t = tqdm(range(number_of_runs), position=0, leave=True)

# Starting parallel clients
cw_start = datetime.datetime.utcnow()

results = Parallel(n_jobs=number_of_clients, prefer="threads")(
    delayed(inference_latency)(predictor.predict, [random_sentence(), random_sentence()])
    for _ in t
)
avg_throughput = t.total / t.format_dict['elapsed']

cw_end = datetime.datetime.utcnow()

# Computing metrics and print
latencies = [res['latency'] for res in results]
errors = [res['error'] for res in results]
error_p = sum(errors) / len(errors) * 100
p50 = np.quantile(latencies[-1000:], 0.50) * 1000
p90 = np.quantile(latencies[-1000:], 0.90) * 1000
p95 = np.quantile(latencies[-1000:], 0.95) * 1000

print(f'Avg Throughput: :{avg_throughput:.1f}\n')
print(f'50th Percentile Latency:{p50:.1f} ms')
print(f'90th Percentile Latency:{p90:.1f} ms')
print(f'95th Percentile Latency:{p95:.1f} ms\n')
print(f'Errors percentage: {error_p:.1f} %\n')

# Querying CloudWatch
print('Getting Cloudwatch:')
cloudwatch = boto3.client('cloudwatch')
statistics = ['SampleCount', 'Average', 'Minimum', 'Maximum']
extended = ['p50', 'p90', 'p95', 'p100']

# Give 5 minute buffer to end
cw_end += datetime.timedelta(minutes=5)

# Period must be 1, 5, 10, 30, or multiple of 60
# Calculate closest multiple of 60 to the total elapsed time
factor = math.ceil((cw_end - cw_start).total_seconds() / 60)
period = factor * 60
print('Time elapsed: {} seconds'.format((cw_end - cw_start).total_seconds()))
print('Using period of {} seconds\n'.format(period))

cloudwatch_ready = False
# Keep polling CloudWatch metrics until datapoints are available
while not cloudwatch_ready:
    time.sleep(30)
    print('Waiting 30 seconds ...')
    # ModelLatency is reported in the default units of microseconds
    model_latency_metrics = cloudwatch.get_metric_statistics(MetricName='ModelLatency',
                                                             Dimensions=[{'Name': 'EndpointName',
                                                                          'Value': predictor.endpoint_name},
                                                                         {'Name': 'VariantName',
                                                                          'Value': "AllTraffic"}],
                                                             Namespace="AWS/SageMaker",
                                                             StartTime=cw_start,
                                                             EndTime=cw_end,
                                                             Period=period,
                                                             Statistics=statistics,
                                                             ExtendedStatistics=extended
                                                             )
    # SampleCount should be 1000 (one datapoint per request)
    if len(model_latency_metrics['Datapoints']) > 0:
        print('{} latency datapoints ready'.format(model_latency_metrics['Datapoints'][0]['SampleCount']))
        # Convert microseconds to milliseconds
        side_avg = model_latency_metrics['Datapoints'][0]['Average'] / 1000.0
        side_p50 = model_latency_metrics['Datapoints'][0]['ExtendedStatistics']['p50'] / 1000.0
        side_p90 = model_latency_metrics['Datapoints'][0]['ExtendedStatistics']['p90'] / 1000.0
        side_p95 = model_latency_metrics['Datapoints'][0]['ExtendedStatistics']['p95'] / 1000.0
        side_p100 = model_latency_metrics['Datapoints'][0]['ExtendedStatistics']['p100'] / 1000.0

        print(f'50th Percentile Latency:{side_p50:.1f} ms')
        print(f'90th Percentile Latency:{side_p90:.1f} ms')
        print(f'95th Percentile Latency:{side_p95:.1f} ms\n')
        cloudwatch_ready = True
```
### Cleanup
Endpoints should be deleted when no longer in use, to avoid costs.
```
predictor.delete_endpoint(predictor.endpoint)
```
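Depending on your SageMaker SDK version, you may also want to remove the model resource that backs the endpoint; a minimal sketch, assuming the same `predictor` object:

```
predictor.delete_model()
```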
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 current active has-children">
<a class="reference internal" href="../../../../frameworks/torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l3 current active has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4 current active">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
<div class="section" id="Deploy-a-pretrained-PyTorch-BERT-model-from-HuggingFace-on-Amazon-SageMaker-with-Neuron-container">
<h1>Deploy a pretrained PyTorch BERT model from HuggingFace on Amazon SageMaker with Neuron container<a class="headerlink" href="#Deploy-a-pretrained-PyTorch-BERT-model-from-HuggingFace-on-Amazon-SageMaker-with-Neuron-container" title="Permalink to this headline">#</a></h1>
<div class="section" id="Overview">
<h2>Overview<a class="headerlink" href="#Overview" title="Permalink to this headline">#</a></h2>
<p>In this tutotial we will deploy on SageMaker a pretraine BERT Base model from HuggingFace Transformers, using the <a class="reference external" href="https://github.com/aws/deep-learning-containers">AWS Deep Learning Containers</a>. We will use the same model as shown in the <a class="reference external" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html#">Neuron Tutorial “PyTorch - HuggingFace Pretrained BERT Tutorial”</a>. We will compile the model and build a custom AWS Deep Learning Container, to include the HuggingFace Transformers
Library.</p>
<p>This Jupyter Notebook should run on a ml.c5.4xlarge SageMaker Notebook instance. You can set up your SageMaker Notebook instance by following the <a class="reference external" href="https://docs.aws.amazon.com/sagemaker/latest/dg/gs-console.html">Get Started with Amazon SageMaker Notebook Instances</a> documentation.</p>
<blockquote>
<div><p>We recommend increasing the size of the base root volume of you SM notebook instance, to accomodate the models and containers built locally. A root volume of 10Gb should suffice.</p>
</div></blockquote>
</div>
<div class="section" id="Install-Dependencies:">
<h2>Install Dependencies:<a class="headerlink" href="#Install-Dependencies:" title="Permalink to this headline">#</a></h2>
<p>This tutorial requires the following pip packages:</p>
<ul class="simple">
<li><p>torch-neuron</p></li>
<li><p>neuron-cc[tensorflow]</p></li>
<li><p>transformers</p></li>
</ul>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">%</span><span class="k">env</span> TOKENIZERS_PARALLELISM=True #Supresses tokenizer warnings making errors easier to detect
<span class="o">!</span>pip<span class="w"> </span>install<span class="w"> </span>--upgrade<span class="w"> </span>--no-cache-dir<span class="w"> </span>torch-neuron<span class="w"> </span>neuron-cc<span class="o">[</span>tensorflow<span class="o">]</span><span class="w"> </span>torchvision<span class="w"> </span>torch<span class="w"> </span>--extra-index-url<span class="o">=</span>https://pip.repos.neuron.amazonaws.com
<span class="o">!</span>pip<span class="w"> </span>install<span class="w"> </span>--upgrade<span class="w"> </span>--no-cache-dir<span class="w"> </span><span class="s1">'transformers==4.6.0'</span>
</pre></div>
</div>
</div>
</div>
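If the installs succeed, a quick import check (a cell we add here for convenience, not part of the original notebook) confirms the environment is usable before compiling:

.. code:: python

    # Optional environment check: confirm the Neuron packages import cleanly
    # and print the framework versions in use.
    import torch
    import torch_neuron  # noqa: F401  (registers the Neuron ops on import)
    import transformers

    print("torch:", torch.__version__)
    print("transformers:", transformers.__version__)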
<div class="section" id="Compile-the-model-into-an-AWS-Neuron-optimized-TorchScript">
<h2>Compile the model into an AWS Neuron optimized TorchScript<a class="headerlink" href="#Compile-the-model-into-an-AWS-Neuron-optimized-TorchScript" title="Permalink to this headline">#</a></h2>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torch</span>
<span class="kn">import</span> <span class="nn">torch_neuron</span>
<span class="kn">from</span> <span class="nn">transformers</span> <span class="kn">import</span> <span class="n">AutoTokenizer</span><span class="p">,</span> <span class="n">AutoModelForSequenceClassification</span><span class="p">,</span> <span class="n">AutoConfig</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="c1"># Build tokenizer and model</span>
<span class="n">tokenizer</span> <span class="o">=</span> <span class="n">AutoTokenizer</span><span class="o">.</span><span class="n">from_pretrained</span><span class="p">(</span><span class="s2">"bert-base-cased-finetuned-mrpc"</span><span class="p">)</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">AutoModelForSequenceClassification</span><span class="o">.</span><span class="n">from_pretrained</span><span class="p">(</span><span class="s2">"bert-base-cased-finetuned-mrpc"</span><span class="p">,</span> <span class="n">return_dict</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span>
<span class="c1"># Setup some example inputs</span>
<span class="n">sequence_0</span> <span class="o">=</span> <span class="s2">"The company HuggingFace is based in New York City"</span>
<span class="n">sequence_1</span> <span class="o">=</span> <span class="s2">"Apples are especially bad for your health"</span>
<span class="n">sequence_2</span> <span class="o">=</span> <span class="s2">"HuggingFace's headquarters are situated in Manhattan"</span>
<span class="n">max_length</span><span class="o">=</span><span class="mi">128</span>
<span class="n">paraphrase</span> <span class="o">=</span> <span class="n">tokenizer</span><span class="o">.</span><span class="n">encode_plus</span><span class="p">(</span><span class="n">sequence_0</span><span class="p">,</span> <span class="n">sequence_2</span><span class="p">,</span> <span class="n">max_length</span><span class="o">=</span><span class="n">max_length</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="s1">'max_length'</span><span class="p">,</span> <span class="n">truncation</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">return_tensors</span><span class="o">=</span><span class="s2">"pt"</span><span class="p">)</span>
<span class="n">not_paraphrase</span> <span class="o">=</span> <span class="n">tokenizer</span><span class="o">.</span><span class="n">encode_plus</span><span class="p">(</span><span class="n">sequence_0</span><span class="p">,</span> <span class="n">sequence_1</span><span class="p">,</span> <span class="n">max_length</span><span class="o">=</span><span class="n">max_length</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="s1">'max_length'</span><span class="p">,</span> <span class="n">truncation</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">return_tensors</span><span class="o">=</span><span class="s2">"pt"</span><span class="p">)</span>
<span class="c1"># Run the original PyTorch model on compilation exaple</span>
<span class="n">paraphrase_classification_logits</span> <span class="o">=</span> <span class="n">model</span><span class="p">(</span><span class="o">**</span><span class="n">paraphrase</span><span class="p">)[</span><span class="mi">0</span><span class="p">]</span>
<span class="c1"># Convert example inputs to a format that is compatible with TorchScript tracing</span>
<span class="n">example_inputs_paraphrase</span> <span class="o">=</span> <span class="n">paraphrase</span><span class="p">[</span><span class="s1">'input_ids'</span><span class="p">],</span> <span class="n">paraphrase</span><span class="p">[</span><span class="s1">'attention_mask'</span><span class="p">],</span> <span class="n">paraphrase</span><span class="p">[</span><span class="s1">'token_type_ids'</span><span class="p">]</span>
<span class="n">example_inputs_not_paraphrase</span> <span class="o">=</span> <span class="n">not_paraphrase</span><span class="p">[</span><span class="s1">'input_ids'</span><span class="p">],</span> <span class="n">not_paraphrase</span><span class="p">[</span><span class="s1">'attention_mask'</span><span class="p">],</span> <span class="n">not_paraphrase</span><span class="p">[</span><span class="s1">'token_type_ids'</span><span class="p">]</span>
</pre></div>
</div>
</div>
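Before compiling, you can optionally confirm that the example inputs and the CPU model behave as expected. The following sketch is our addition, assuming the cells above have run; it prints the predicted MRPC class for each sentence pair:

.. code:: python

    # Hypothetical sanity check: run the CPU model on both example pairs.
    # For the MRPC task, label 0 is "not paraphrase" and label 1 is "paraphrase".
    with torch.no_grad():
        para_logits = model(*example_inputs_paraphrase)[0]
        not_para_logits = model(*example_inputs_not_paraphrase)[0]

    classes = ['not paraphrase', 'paraphrase']
    print("sequence_0 / sequence_2:", classes[para_logits.argmax().item()])
    print("sequence_0 / sequence_1:", classes[not_para_logits.argmax().item()])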
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">%%time</span>
<span class="c1"># Run torch.neuron.trace to generate a TorchScript that is optimized by AWS Neuron</span>
<span class="c1"># This step may need 3-5 min</span>
<span class="n">model_neuron</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">neuron</span><span class="o">.</span><span class="n">trace</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">example_inputs_paraphrase</span><span class="p">,</span> <span class="n">verbose</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">compiler_workdir</span><span class="o">=</span><span class="s1">'./compilation_artifacts'</span><span class="p">)</span>
</pre></div>
</div>
</div>
You may inspect ``model_neuron.graph`` to see which parts run on the CPU versus on the accelerator. All native ``aten`` operators in the graph will run on the CPU.

.. code:: ipython3

    # See which part is running on CPU versus running on the accelerator.
    print(model_neuron.graph)
Save the compiled model, so it can be packaged and sent to S3.

.. code:: ipython3

    # Save the TorchScript for later use
    model_neuron.save('neuron_compiled_model.pt')
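As a side note, the saved TorchScript can later be reloaded on an Inf1 host with plain ``torch.jit.load``. The sketch below is our illustration, assuming ``torch-neuron`` is installed on that host:

.. code:: python

    # Minimal reload sketch for an Inf1 instance (assumes torch-neuron is
    # installed there). Importing torch_neuron registers the Neuron operators
    # that the saved TorchScript references.
    import torch
    import torch_neuron  # noqa: F401

    reloaded = torch.jit.load('neuron_compiled_model.pt')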
<div class="section" id="Package-the-pre-trained-model-and-upload-it-to-S3">
<h3>Package the pre-trained model and upload it to S3<a class="headerlink" href="#Package-the-pre-trained-model-and-upload-it-to-S3" title="Permalink to this headline">#</a></h3>
<p>To make the model available for the SageMaker deployment, you will TAR the serialized graph and upload it to the default Amazon S3 bucket for your SageMaker session.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="c1"># Now you'll create a model.tar.gz file to be used by SageMaker endpoint</span>
<span class="o">!</span>tar<span class="w"> </span>-czvf<span class="w"> </span>model.tar.gz<span class="w"> </span>neuron_compiled_model.pt
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">boto3</span>
<span class="kn">import</span> <span class="nn">time</span>
<span class="kn">from</span> <span class="nn">sagemaker.utils</span> <span class="kn">import</span> <span class="n">name_from_base</span>
<span class="kn">import</span> <span class="nn">sagemaker</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="c1"># upload model to S3</span>
<span class="n">role</span> <span class="o">=</span> <span class="n">sagemaker</span><span class="o">.</span><span class="n">get_execution_role</span><span class="p">()</span>
<span class="n">sess</span><span class="o">=</span><span class="n">sagemaker</span><span class="o">.</span><span class="n">Session</span><span class="p">()</span>
<span class="n">region</span><span class="o">=</span><span class="n">sess</span><span class="o">.</span><span class="n">boto_region_name</span>
<span class="n">bucket</span><span class="o">=</span><span class="n">sess</span><span class="o">.</span><span class="n">default_bucket</span><span class="p">()</span>
<span class="n">sm_client</span><span class="o">=</span><span class="n">boto3</span><span class="o">.</span><span class="n">client</span><span class="p">(</span><span class="s1">'sagemaker'</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">model_key</span> <span class="o">=</span> <span class="s1">'</span><span class="si">{}</span><span class="s1">/model/model.tar.gz'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="s1">'inf1_compiled_model'</span><span class="p">)</span>
<span class="n">model_path</span> <span class="o">=</span> <span class="s1">'s3://</span><span class="si">{}</span><span class="s1">/</span><span class="si">{}</span><span class="s1">'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">bucket</span><span class="p">,</span> <span class="n">model_key</span><span class="p">)</span>
<span class="n">boto3</span><span class="o">.</span><span class="n">resource</span><span class="p">(</span><span class="s1">'s3'</span><span class="p">)</span><span class="o">.</span><span class="n">Bucket</span><span class="p">(</span><span class="n">bucket</span><span class="p">)</span><span class="o">.</span><span class="n">upload_file</span><span class="p">(</span><span class="s1">'model.tar.gz'</span><span class="p">,</span> <span class="n">model_key</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Uploaded model to S3:"</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">model_path</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
</div>
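If you want to double-check the upload before moving on, a short boto3 call (our addition, using the ``bucket`` and ``model_key`` variables defined above) confirms the object exists and reports its size:

.. code:: python

    # Hypothetical verification cell: confirm model.tar.gz landed in S3.
    # Assumes `bucket` and `model_key` from the cells above.
    import boto3

    resp = boto3.client('s3').head_object(Bucket=bucket, Key=model_key)
    print("model.tar.gz is in S3, size:", resp['ContentLength'], "bytes")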
<div class="section" id="Build-and-Push-the-container">
<h2>Build and Push the container<a class="headerlink" href="#Build-and-Push-the-container" title="Permalink to this headline">#</a></h2>
<p>The following shell code shows how to build the container image using docker build and push the container image to ECR using docker push. The Dockerfile in this example is available in the <strong>container</strong> folder. Here’s an example of the Dockerfile:</p>
<div class="highlight-dockerfile notranslate"><div class="highlight"><pre><span></span><span class="k">FROM</span><span class="w"> </span><span class="s">763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference-neuron:1.7.1-neuron-py36-ubuntu18.04</span>
<span class="c"># Install packages</span>
<span class="k">RUN</span><span class="w"> </span>pip<span class="w"> </span>install<span class="w"> </span><span class="s2">"transformers==4.7.0"</span>
</pre></div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">!</span>cat<span class="w"> </span>container/Dockerfile
</pre></div>
</div>
</div>
<p>Before running the next cell, make sure your SageMaker IAM role has access to ECR. If not, you can attache the role <code class="docutils literal notranslate"><span class="pre">AmazonEC2ContainerRegistryPowerUser</span></code> to your IAM role ARN, which allows you to upload image layers to ECR.</p>
<p>It takes 5 minutes to build docker images and upload image to ECR</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-sh notranslate"><div class="highlight"><pre><span></span>%%sh
<span class="c1"># The name of our algorithm</span>
<span class="nv">algorithm_name</span><span class="o">=</span>neuron-py36-inference
<span class="nb">cd</span><span class="w"> </span>container
<span class="nv">account</span><span class="o">=</span><span class="k">$(</span>aws<span class="w"> </span>sts<span class="w"> </span>get-caller-identity<span class="w"> </span>--query<span class="w"> </span>Account<span class="w"> </span>--output<span class="w"> </span>text<span class="k">)</span>
<span class="c1"># Get the region defined in the current configuration (default to us-west-2 if none defined)</span>
<span class="nv">region</span><span class="o">=</span><span class="k">$(</span>aws<span class="w"> </span>configure<span class="w"> </span>get<span class="w"> </span>region<span class="k">)</span>
<span class="nv">region</span><span class="o">=</span><span class="si">${</span><span class="nv">region</span><span class="k">:-</span><span class="nv">us</span><span class="p">-west-2</span><span class="si">}</span>
<span class="nv">fullname</span><span class="o">=</span><span class="s2">"</span><span class="si">${</span><span class="nv">account</span><span class="si">}</span><span class="s2">.dkr.ecr.</span><span class="si">${</span><span class="nv">region</span><span class="si">}</span><span class="s2">.amazonaws.com/</span><span class="si">${</span><span class="nv">algorithm_name</span><span class="si">}</span><span class="s2">:latest"</span>
<span class="c1"># If the repository doesn't exist in ECR, create it.</span>
aws<span class="w"> </span>ecr<span class="w"> </span>describe-repositories<span class="w"> </span>--repository-names<span class="w"> </span><span class="s2">"</span><span class="si">${</span><span class="nv">algorithm_name</span><span class="si">}</span><span class="s2">"</span><span class="w"> </span>><span class="w"> </span>/dev/null<span class="w"> </span><span class="m">2</span>><span class="p">&</span><span class="m">1</span>
<span class="k">if</span><span class="w"> </span><span class="o">[</span><span class="w"> </span><span class="nv">$?</span><span class="w"> </span>-ne<span class="w"> </span><span class="m">0</span><span class="w"> </span><span class="o">]</span>
<span class="k">then</span>
<span class="w"> </span>aws<span class="w"> </span>ecr<span class="w"> </span>create-repository<span class="w"> </span>--repository-name<span class="w"> </span><span class="s2">"</span><span class="si">${</span><span class="nv">algorithm_name</span><span class="si">}</span><span class="s2">"</span><span class="w"> </span>><span class="w"> </span>/dev/null
<span class="k">fi</span>
<span class="c1"># Get the login command from ECR in order to pull down the SageMaker PyTorch image</span>
aws<span class="w"> </span>ecr<span class="w"> </span>get-login-password<span class="w"> </span>--region<span class="w"> </span>us-east-1<span class="w"> </span><span class="p">|</span><span class="w"> </span>docker<span class="w"> </span>login<span class="w"> </span>--username<span class="w"> </span>AWS<span class="w"> </span>--password-stdin<span class="w"> </span><span class="m">763104351884</span>.dkr.ecr.us-east-1.amazonaws.com
<span class="c1"># Build the docker image locally with the image name and then push it to ECR</span>
<span class="c1"># with the full name.</span>
docker<span class="w"> </span>build<span class="w"> </span>-t<span class="w"> </span><span class="si">${</span><span class="nv">algorithm_name</span><span class="si">}</span><span class="w"> </span>.<span class="w"> </span>--build-arg<span class="w"> </span><span class="nv">REGION</span><span class="o">=</span><span class="si">${</span><span class="nv">region</span><span class="si">}</span>
docker<span class="w"> </span>tag<span class="w"> </span><span class="si">${</span><span class="nv">algorithm_name</span><span class="si">}</span><span class="w"> </span><span class="si">${</span><span class="nv">fullname</span><span class="si">}</span>
<span class="c1"># Get the login command from ECR and execute it directly</span>
aws<span class="w"> </span>ecr<span class="w"> </span>get-login-password<span class="w"> </span>--region<span class="w"> </span><span class="si">${</span><span class="nv">region</span><span class="si">}</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>docker<span class="w"> </span>login<span class="w"> </span>--username<span class="w"> </span>AWS<span class="w"> </span>--password-stdin<span class="w"> </span><span class="si">${</span><span class="nv">account</span><span class="si">}</span>.dkr.ecr.<span class="si">${</span><span class="nv">region</span><span class="si">}</span>.amazonaws.com
docker<span class="w"> </span>push<span class="w"> </span><span class="si">${</span><span class="nv">fullname</span><span class="si">}</span>
</pre></div>
</div>
</div>
</div>
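Optionally, you can verify from Python that the image landed in ECR. This is a hedged sketch, not part of the original tutorial; it assumes the repository name matches the `algorithm_name` used above:
```
import boto3
# List the tags pushed to the repository created by the previous cell
ecr = boto3.client('ecr')
response = ecr.describe_images(repositoryName='neuron-py36-inference')
print([tag for image in response['imageDetails'] for tag in image.get('imageTags', [])])
```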
<div class="section" id="Deploy-Container-and-run-inference-based-on-the-pretrained-model">
<h2>Deploy Container and run inference based on the pretrained model<a class="headerlink" href="#Deploy-Container-and-run-inference-based-on-the-pretrained-model" title="Permalink to this headline">#</a></h2>
<p>To deploy a pretrained PyTorch model, you’ll need to use the PyTorch estimator object to create a PyTorchModel object and set a different entry_point.</p>
<p>You’ll use the PyTorchModel object to deploy a PyTorchPredictor. This creates a SageMaker Endpoint – a hosted prediction service that we can use to perform inference.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">sys</span>
<span class="o">!{</span>sys.executable<span class="o">}</span><span class="w"> </span>-m<span class="w"> </span>pip<span class="w"> </span>install<span class="w"> </span>Transformers
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">os</span>
<span class="kn">import</span> <span class="nn">boto3</span>
<span class="kn">import</span> <span class="nn">sagemaker</span>
<span class="n">role</span> <span class="o">=</span> <span class="n">sagemaker</span><span class="o">.</span><span class="n">get_execution_role</span><span class="p">()</span>
<span class="n">sess</span> <span class="o">=</span> <span class="n">sagemaker</span><span class="o">.</span><span class="n">Session</span><span class="p">()</span>
<span class="n">bucket</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">default_bucket</span><span class="p">()</span>
<span class="n">prefix</span> <span class="o">=</span> <span class="s2">"inf1_compiled_model/model"</span>
<span class="c1"># Get container name in ECR</span>
<span class="n">client</span><span class="o">=</span><span class="n">boto3</span><span class="o">.</span><span class="n">client</span><span class="p">(</span><span class="s1">'sts'</span><span class="p">)</span>
<span class="n">account</span><span class="o">=</span><span class="n">client</span><span class="o">.</span><span class="n">get_caller_identity</span><span class="p">()[</span><span class="s1">'Account'</span><span class="p">]</span>
<span class="n">my_session</span><span class="o">=</span><span class="n">boto3</span><span class="o">.</span><span class="n">session</span><span class="o">.</span><span class="n">Session</span><span class="p">()</span>
<span class="n">region</span><span class="o">=</span><span class="n">my_session</span><span class="o">.</span><span class="n">region_name</span>
<span class="n">algorithm_name</span><span class="o">=</span><span class="s2">"neuron-py36-inference"</span>
<span class="n">ecr_image</span><span class="o">=</span><span class="s1">'</span><span class="si">{}</span><span class="s1">.dkr.ecr.</span><span class="si">{}</span><span class="s1">.amazonaws.com/</span><span class="si">{}</span><span class="s1">:latest'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">account</span><span class="p">,</span> <span class="n">region</span><span class="p">,</span> <span class="n">algorithm_name</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">ecr_image</span><span class="p">)</span>
</pre></div>
</div>
</div>
An implementation of `model_fn` is required by the inference script. We are going to implement our own `model_fn` and `predict_fn` for Hugging Face BERT, and use the default implementations of `input_fn` and `output_fn` defined in sagemaker-pytorch-containers.
In this example, the inference script is put in the **code** folder. Run the next cell to see it:
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">!</span>pygmentize<span class="w"> </span>code/inference.py
</pre></div>
</div>
</div>
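For orientation, the sketch below shows the general shape of such a script. It is hypothetical; the file printed above is the authoritative version, and the model filename and tokenizer name here are assumptions:
```
# Hypothetical sketch of a Neuron inference script; see the pygmentize
# output above for the actual code used in this tutorial.
import os
import torch
import torch_neuron  # registers Neuron ops so torch.jit.load can deserialize the model
from transformers import AutoTokenizer

MODEL_FILE = "model_neuron.pt"                # assumption: name of the compiled artifact
TOKENIZER = "bert-base-cased-finetuned-mrpc"  # assumption: tokenizer used at compile time

def model_fn(model_dir):
    """Load the Neuron-compiled TorchScript model and its tokenizer."""
    model = torch.jit.load(os.path.join(model_dir, MODEL_FILE))
    tokenizer = AutoTokenizer.from_pretrained(TOKENIZER)
    return model, tokenizer

def predict_fn(input_data, model_and_tokenizer):
    """Tokenize the request payload and run it through the compiled model."""
    model, tokenizer = model_and_tokenizer
    inputs = tokenizer(input_data, return_tensors="pt", max_length=128,
                       padding="max_length", truncation=True)
    with torch.no_grad():
        # The return value must be serializable by the default output_fn
        return model(inputs["input_ids"], inputs["attention_mask"])
```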
Path of the compiled pretrained model in S3:
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">key</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">prefix</span><span class="p">,</span> <span class="s2">"model.tar.gz"</span><span class="p">)</span>
<span class="n">pretrained_model_data</span> <span class="o">=</span> <span class="s2">"s3://</span><span class="si">{}</span><span class="s2">/</span><span class="si">{}</span><span class="s2">"</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">bucket</span><span class="p">,</span> <span class="n">key</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">pretrained_model_data</span><span class="p">)</span>
</pre></div>
</div>
</div>
The model object is defined using the SageMaker Python SDK's `PyTorchModel`, passing in the model data and the `entry_point`. The endpoint's entry point for inference is defined by `model_fn`, as seen in the previous code block that prints out `inference.py`. The `model_fn` function will load the model and the required tokenizer.
Note that `image_uri` must be your own ECR image.
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">sagemaker.pytorch.model</span> <span class="kn">import</span> <span class="n">PyTorchModel</span>
<span class="n">pytorch_model</span> <span class="o">=</span> <span class="n">PyTorchModel</span><span class="p">(</span>
<span class="n">model_data</span><span class="o">=</span><span class="n">pretrained_model_data</span><span class="p">,</span>
<span class="n">role</span><span class="o">=</span><span class="n">role</span><span class="p">,</span>
<span class="n">source_dir</span><span class="o">=</span><span class="s2">"code"</span><span class="p">,</span>
<span class="n">framework_version</span><span class="o">=</span><span class="s2">"1.7.1"</span><span class="p">,</span>
<span class="n">entry_point</span><span class="o">=</span><span class="s2">"inference.py"</span><span class="p">,</span>
<span class="n">image_uri</span><span class="o">=</span><span class="n">ecr_image</span>
<span class="p">)</span>
<span class="c1"># Let SageMaker know that we've already compiled the model via neuron-cc</span>
<span class="n">pytorch_model</span><span class="o">.</span><span class="n">_is_compiled_model</span> <span class="o">=</span> <span class="kc">True</span>
</pre></div>
</div>
</div>
The arguments to the deploy function allow us to set the number and type of instances that will be used for the endpoint.
Here you will deploy the model to a single **ml.inf1.2xlarge** instance. It may take 6-10 minutes to deploy.
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">%%time</span>
<span class="n">predictor</span> <span class="o">=</span> <span class="n">pytorch_model</span><span class="o">.</span><span class="n">deploy</span><span class="p">(</span><span class="n">initial_instance_count</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">instance_type</span><span class="o">=</span><span class="s2">"ml.inf1.2xlarge"</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="nb">print</span><span class="p">(</span><span class="n">predictor</span><span class="o">.</span><span class="n">endpoint_name</span><span class="p">)</span>
</pre></div>
</div>
</div>
Since `input_fn` declares that incoming requests are JSON-encoded, we need a JSON serializer to encode the request data into a JSON string. Likewise, since the declared response content type is a JSON string, we need a JSON deserializer to parse the response.
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">predictor</span><span class="o">.</span><span class="n">serializer</span> <span class="o">=</span> <span class="n">sagemaker</span><span class="o">.</span><span class="n">serializers</span><span class="o">.</span><span class="n">JSONSerializer</span><span class="p">()</span>
<span class="n">predictor</span><span class="o">.</span><span class="n">deserializer</span> <span class="o">=</span> <span class="n">sagemaker</span><span class="o">.</span><span class="n">deserializers</span><span class="o">.</span><span class="n">JSONDeserializer</span><span class="p">()</span>
</pre></div>
</div>
</div>
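The same endpoint can also be called from any client with plain `boto3`, without the SageMaker SDK. A minimal sketch, not part of the original tutorial, assuming the endpoint name printed above:
```
import json
import boto3

# Invoke the endpoint directly through the SageMaker runtime API
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName=predictor.endpoint_name,  # or the endpoint name string printed earlier
    ContentType="application/json",
    Body=json.dumps(["Never allow the same bug to bite you twice."]),
)
print(json.loads(response["Body"].read()))
```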
Now invoke the SageMaker endpoint with a list of sentences to get predictions.
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">%%time</span>
<span class="n">result</span> <span class="o">=</span> <span class="n">predictor</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span>
<span class="p">[</span>
<span class="s2">"Never allow the same bug to bite you twice."</span><span class="p">,</span>
<span class="s2">"The best part of Amazon SageMaker is that it makes machine learning easy."</span><span class="p">,</span>
<span class="p">]</span>
<span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">result</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">%%time</span>
<span class="n">result</span> <span class="o">=</span> <span class="n">predictor</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span>
<span class="p">[</span>
<span class="s2">"The company HuggingFace is based in New York City"</span><span class="p">,</span>
<span class="s2">"HuggingFace's headquarters are situated in Manhattan"</span><span class="p">,</span>
<span class="p">]</span>
<span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">result</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="Benchmarking-your-endpoint">
<h2>Benchmarking your endpoint<a class="headerlink" href="#Benchmarking-your-endpoint" title="Permalink to this headline">#</a></h2>
<p>The following cells create a load test for your endpoint. You first define some helper functions: <code class="docutils literal notranslate"><span class="pre">inference_latency</span></code> runs the endpoint request, collects cliend side latency and any errors, <code class="docutils literal notranslate"><span class="pre">random_sentence</span></code> builds random to be sent to the endpoint.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">import</span> <span class="nn">datetime</span>
<span class="kn">import</span> <span class="nn">math</span>
<span class="kn">import</span> <span class="nn">time</span>
<span class="kn">import</span> <span class="nn">boto3</span>
<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="nn">plt</span>
<span class="kn">from</span> <span class="nn">joblib</span> <span class="kn">import</span> <span class="n">Parallel</span><span class="p">,</span> <span class="n">delayed</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">from</span> <span class="nn">tqdm</span> <span class="kn">import</span> <span class="n">tqdm</span>
<span class="kn">import</span> <span class="nn">random</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">inference_latency</span><span class="p">(</span><span class="n">model</span><span class="p">,</span><span class="o">*</span><span class="n">inputs</span><span class="p">):</span>
<span class="w"> </span><span class="sd">"""</span>
<span class="sd"> infetence_time is a simple method to return the latency of a model inference.</span>
<span class="sd"> Parameters:</span>
<span class="sd"> model: torch model onbject loaded using torch.jit.load</span>
<span class="sd"> inputs: model() args</span>
<span class="sd"> Returns:</span>
<span class="sd"> latency in seconds</span>
<span class="sd"> """</span>
<span class="n">error</span> <span class="o">=</span> <span class="kc">False</span>
<span class="n">start</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span>
<span class="k">try</span><span class="p">:</span>
<span class="n">results</span> <span class="o">=</span> <span class="n">model</span><span class="p">(</span><span class="o">*</span><span class="n">inputs</span><span class="p">)</span>
<span class="k">except</span><span class="p">:</span>
<span class="n">error</span> <span class="o">=</span> <span class="kc">True</span>
<span class="n">results</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">return</span> <span class="p">{</span><span class="s1">'latency'</span><span class="p">:</span><span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span> <span class="o">-</span> <span class="n">start</span><span class="p">,</span> <span class="s1">'error'</span><span class="p">:</span> <span class="n">error</span><span class="p">,</span> <span class="s1">'result'</span><span class="p">:</span> <span class="n">results</span><span class="p">}</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">random_sentence</span><span class="p">():</span>
<span class="n">s_nouns</span> <span class="o">=</span> <span class="p">[</span><span class="s2">"A dude"</span><span class="p">,</span> <span class="s2">"My mom"</span><span class="p">,</span> <span class="s2">"The king"</span><span class="p">,</span> <span class="s2">"Some guy"</span><span class="p">,</span> <span class="s2">"A cat with rabies"</span><span class="p">,</span> <span class="s2">"A sloth"</span><span class="p">,</span> <span class="s2">"Your homie"</span><span class="p">,</span> <span class="s2">"This cool guy my gardener met yesterday"</span><span class="p">,</span> <span class="s2">"Superman"</span><span class="p">]</span>
<span class="n">p_nouns</span> <span class="o">=</span> <span class="p">[</span><span class="s2">"These dudes"</span><span class="p">,</span> <span class="s2">"Both of my moms"</span><span class="p">,</span> <span class="s2">"All the kings of the world"</span><span class="p">,</span> <span class="s2">"Some guys"</span><span class="p">,</span> <span class="s2">"All of a cattery's cats"</span><span class="p">,</span> <span class="s2">"The multitude of sloths living under your bed"</span><span class="p">,</span> <span class="s2">"Your homies"</span><span class="p">,</span> <span class="s2">"Like, these, like, all these people"</span><span class="p">,</span> <span class="s2">"Supermen"</span><span class="p">]</span>
<span class="n">s_verbs</span> <span class="o">=</span> <span class="p">[</span><span class="s2">"eats"</span><span class="p">,</span> <span class="s2">"kicks"</span><span class="p">,</span> <span class="s2">"gives"</span><span class="p">,</span> <span class="s2">"treats"</span><span class="p">,</span> <span class="s2">"meets with"</span><span class="p">,</span> <span class="s2">"creates"</span><span class="p">,</span> <span class="s2">"hacks"</span><span class="p">,</span> <span class="s2">"configures"</span><span class="p">,</span> <span class="s2">"spies on"</span><span class="p">,</span> <span class="s2">"retards"</span><span class="p">,</span> <span class="s2">"meows on"</span><span class="p">,</span> <span class="s2">"flees from"</span><span class="p">,</span> <span class="s2">"tries to automate"</span><span class="p">,</span> <span class="s2">"explodes"</span><span class="p">]</span>
<span class="n">p_verbs</span> <span class="o">=</span> <span class="p">[</span><span class="s2">"eat"</span><span class="p">,</span> <span class="s2">"kick"</span><span class="p">,</span> <span class="s2">"give"</span><span class="p">,</span> <span class="s2">"treat"</span><span class="p">,</span> <span class="s2">"meet with"</span><span class="p">,</span> <span class="s2">"create"</span><span class="p">,</span> <span class="s2">"hack"</span><span class="p">,</span> <span class="s2">"configure"</span><span class="p">,</span> <span class="s2">"spy on"</span><span class="p">,</span> <span class="s2">"retard"</span><span class="p">,</span> <span class="s2">"meow on"</span><span class="p">,</span> <span class="s2">"flee from"</span><span class="p">,</span> <span class="s2">"try to automate"</span><span class="p">,</span> <span class="s2">"explode"</span><span class="p">]</span>
<span class="n">infinitives</span> <span class="o">=</span> <span class="p">[</span><span class="s2">"to make a pie."</span><span class="p">,</span> <span class="s2">"for no apparent reason."</span><span class="p">,</span> <span class="s2">"because the sky is green."</span><span class="p">,</span> <span class="s2">"for a disease."</span><span class="p">,</span> <span class="s2">"to be able to make toast explode."</span><span class="p">,</span> <span class="s2">"to know more about archeology."</span><span class="p">]</span>
<span class="k">return</span> <span class="p">(</span><span class="n">random</span><span class="o">.</span><span class="n">choice</span><span class="p">(</span><span class="n">s_nouns</span><span class="p">)</span> <span class="o">+</span> <span class="s1">' '</span> <span class="o">+</span> <span class="n">random</span><span class="o">.</span><span class="n">choice</span><span class="p">(</span><span class="n">s_verbs</span><span class="p">)</span> <span class="o">+</span> <span class="s1">' '</span> <span class="o">+</span> <span class="n">random</span><span class="o">.</span><span class="n">choice</span><span class="p">(</span><span class="n">s_nouns</span><span class="p">)</span><span class="o">.</span><span class="n">lower</span><span class="p">()</span> <span class="ow">or</span> <span class="n">random</span><span class="o">.</span><span class="n">choice</span><span class="p">(</span><span class="n">p_nouns</span><span class="p">)</span><span class="o">.</span><span class="n">lower</span><span class="p">()</span> <span class="o">+</span> <span class="s1">' '</span> <span class="o">+</span> <span class="n">random</span><span class="o">.</span><span class="n">choice</span><span class="p">(</span><span class="n">infinitives</span><span class="p">))</span>
<span class="nb">print</span><span class="p">([</span><span class="n">random_sentence</span><span class="p">(),</span> <span class="n">random_sentence</span><span class="p">()])</span>
</pre></div>
</div>
</div>
The following cell creates `number_of_clients` concurrent threads to run `number_of_runs` requests. Once completed, a `boto3` CloudWatch client will query for the server-side latency metrics for comparison.
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="c1"># Defining Auxiliary variables</span>
<span class="n">number_of_clients</span> <span class="o">=</span> <span class="mi">2</span>
<span class="n">number_of_runs</span> <span class="o">=</span> <span class="mi">1000</span>
<span class="n">t</span> <span class="o">=</span> <span class="n">tqdm</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="n">number_of_runs</span><span class="p">),</span><span class="n">position</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">leave</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="c1"># Starting parallel clients</span>
<span class="n">cw_start</span> <span class="o">=</span> <span class="n">datetime</span><span class="o">.</span><span class="n">datetime</span><span class="o">.</span><span class="n">utcnow</span><span class="p">()</span>
<span class="n">results</span> <span class="o">=</span> <span class="n">Parallel</span><span class="p">(</span><span class="n">n_jobs</span><span class="o">=</span><span class="n">number_of_clients</span><span class="p">,</span><span class="n">prefer</span><span class="o">=</span><span class="s2">"threads"</span><span class="p">)(</span><span class="n">delayed</span><span class="p">(</span><span class="n">inference_latency</span><span class="p">)(</span><span class="n">predictor</span><span class="o">.</span><span class="n">predict</span><span class="p">,[</span><span class="n">random_sentence</span><span class="p">(),</span> <span class="n">random_sentence</span><span class="p">()])</span> <span class="k">for</span> <span class="n">mod</span> <span class="ow">in</span> <span class="n">t</span><span class="p">)</span>
<span class="n">avg_throughput</span> <span class="o">=</span> <span class="n">t</span><span class="o">.</span><span class="n">total</span><span class="o">/</span><span class="n">t</span><span class="o">.</span><span class="n">format_dict</span><span class="p">[</span><span class="s1">'elapsed'</span><span class="p">]</span>
<span class="n">cw_end</span> <span class="o">=</span> <span class="n">datetime</span><span class="o">.</span><span class="n">datetime</span><span class="o">.</span><span class="n">utcnow</span><span class="p">()</span>
<span class="c1"># Computing metrics and print</span>
<span class="n">latencies</span> <span class="o">=</span> <span class="p">[</span><span class="n">res</span><span class="p">[</span><span class="s1">'latency'</span><span class="p">]</span> <span class="k">for</span> <span class="n">res</span> <span class="ow">in</span> <span class="n">results</span><span class="p">]</span>
<span class="n">errors</span> <span class="o">=</span> <span class="p">[</span><span class="n">res</span><span class="p">[</span><span class="s1">'error'</span><span class="p">]</span> <span class="k">for</span> <span class="n">res</span> <span class="ow">in</span> <span class="n">results</span><span class="p">]</span>
<span class="n">error_p</span> <span class="o">=</span> <span class="nb">sum</span><span class="p">(</span><span class="n">errors</span><span class="p">)</span><span class="o">/</span><span class="nb">len</span><span class="p">(</span><span class="n">errors</span><span class="p">)</span> <span class="o">*</span><span class="mi">100</span>
<span class="n">p50</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">quantile</span><span class="p">(</span><span class="n">latencies</span><span class="p">[</span><span class="o">-</span><span class="mi">1000</span><span class="p">:],</span><span class="mf">0.50</span><span class="p">)</span> <span class="o">*</span> <span class="mi">1000</span>
<span class="n">p90</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">quantile</span><span class="p">(</span><span class="n">latencies</span><span class="p">[</span><span class="o">-</span><span class="mi">1000</span><span class="p">:],</span><span class="mf">0.95</span><span class="p">)</span> <span class="o">*</span> <span class="mi">1000</span>
<span class="n">p95</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">quantile</span><span class="p">(</span><span class="n">latencies</span><span class="p">[</span><span class="o">-</span><span class="mi">1000</span><span class="p">:],</span><span class="mf">0.99</span><span class="p">)</span> <span class="o">*</span> <span class="mi">1000</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s1">'Avg Throughput: :</span><span class="si">{</span><span class="n">avg_throughput</span><span class="si">:</span><span class="s1">.1f</span><span class="si">}</span><span class="se">\n</span><span class="s1">'</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s1">'50th Percentile Latency:</span><span class="si">{</span><span class="n">p50</span><span class="si">:</span><span class="s1">.1f</span><span class="si">}</span><span class="s1"> ms'</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s1">'90th Percentile Latency:</span><span class="si">{</span><span class="n">p90</span><span class="si">:</span><span class="s1">.1f</span><span class="si">}</span><span class="s1"> ms'</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s1">'95th Percentile Latency:</span><span class="si">{</span><span class="n">p95</span><span class="si">:</span><span class="s1">.1f</span><span class="si">}</span><span class="s1"> ms</span><span class="se">\n</span><span class="s1">'</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s1">'Errors percentage: </span><span class="si">{</span><span class="n">error_p</span><span class="si">:</span><span class="s1">.1f</span><span class="si">}</span><span class="s1"> %</span><span class="se">\n</span><span class="s1">'</span><span class="p">)</span>
<span class="c1"># Querying CloudWatch</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'Getting Cloudwatch:'</span><span class="p">)</span>
<span class="n">cloudwatch</span> <span class="o">=</span> <span class="n">boto3</span><span class="o">.</span><span class="n">client</span><span class="p">(</span><span class="s1">'cloudwatch'</span><span class="p">)</span>
<span class="n">statistics</span><span class="o">=</span><span class="p">[</span><span class="s1">'SampleCount'</span><span class="p">,</span> <span class="s1">'Average'</span><span class="p">,</span> <span class="s1">'Minimum'</span><span class="p">,</span> <span class="s1">'Maximum'</span><span class="p">]</span>
<span class="n">extended</span><span class="o">=</span><span class="p">[</span><span class="s1">'p50'</span><span class="p">,</span> <span class="s1">'p90'</span><span class="p">,</span> <span class="s1">'p95'</span><span class="p">,</span> <span class="s1">'p100'</span><span class="p">]</span>
<span class="c1"># Give 5 minute buffer to end</span>
<span class="n">cw_end</span> <span class="o">+=</span> <span class="n">datetime</span><span class="o">.</span><span class="n">timedelta</span><span class="p">(</span><span class="n">minutes</span><span class="o">=</span><span class="mi">5</span><span class="p">)</span>
<span class="c1"># Period must be 1, 5, 10, 30, or multiple of 60</span>
<span class="c1"># Calculate closest multiple of 60 to the total elapsed time</span>
<span class="n">factor</span> <span class="o">=</span> <span class="n">math</span><span class="o">.</span><span class="n">ceil</span><span class="p">((</span><span class="n">cw_end</span> <span class="o">-</span> <span class="n">cw_start</span><span class="p">)</span><span class="o">.</span><span class="n">total_seconds</span><span class="p">()</span> <span class="o">/</span> <span class="mi">60</span><span class="p">)</span>
<span class="n">period</span> <span class="o">=</span> <span class="n">factor</span> <span class="o">*</span> <span class="mi">60</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'Time elapsed: </span><span class="si">{}</span><span class="s1"> seconds'</span><span class="o">.</span><span class="n">format</span><span class="p">((</span><span class="n">cw_end</span> <span class="o">-</span> <span class="n">cw_start</span><span class="p">)</span><span class="o">.</span><span class="n">total_seconds</span><span class="p">()))</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'Using period of </span><span class="si">{}</span><span class="s1"> seconds</span><span class="se">\n</span><span class="s1">'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">period</span><span class="p">))</span>
<span class="n">cloudwatch_ready</span> <span class="o">=</span> <span class="kc">False</span>
<span class="c1"># Keep polling CloudWatch metrics until datapoints are available</span>
<span class="k">while</span> <span class="ow">not</span> <span class="n">cloudwatch_ready</span><span class="p">:</span>
<span class="n">time</span><span class="o">.</span><span class="n">sleep</span><span class="p">(</span><span class="mi">30</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'Waiting 30 seconds ...'</span><span class="p">)</span>
<span class="c1"># Must use default units of microseconds</span>
<span class="n">model_latency_metrics</span> <span class="o">=</span> <span class="n">cloudwatch</span><span class="o">.</span><span class="n">get_metric_statistics</span><span class="p">(</span><span class="n">MetricName</span><span class="o">=</span><span class="s1">'ModelLatency'</span><span class="p">,</span>
<span class="n">Dimensions</span><span class="o">=</span><span class="p">[{</span><span class="s1">'Name'</span><span class="p">:</span> <span class="s1">'EndpointName'</span><span class="p">,</span>
<span class="s1">'Value'</span><span class="p">:</span> <span class="n">predictor</span><span class="o">.</span><span class="n">endpoint_name</span><span class="p">},</span>
<span class="p">{</span><span class="s1">'Name'</span><span class="p">:</span> <span class="s1">'VariantName'</span><span class="p">,</span>
<span class="s1">'Value'</span><span class="p">:</span> <span class="s2">"AllTraffic"</span><span class="p">}],</span>
<span class="n">Namespace</span><span class="o">=</span><span class="s2">"AWS/SageMaker"</span><span class="p">,</span>
<span class="n">StartTime</span><span class="o">=</span><span class="n">cw_start</span><span class="p">,</span>
<span class="n">EndTime</span><span class="o">=</span><span class="n">cw_end</span><span class="p">,</span>
<span class="n">Period</span><span class="o">=</span><span class="n">period</span><span class="p">,</span>
<span class="n">Statistics</span><span class="o">=</span><span class="n">statistics</span><span class="p">,</span>
<span class="n">ExtendedStatistics</span><span class="o">=</span><span class="n">extended</span>
<span class="p">)</span>
<span class="c1"># Should be 1000</span>
<span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="n">model_latency_metrics</span><span class="p">[</span><span class="s1">'Datapoints'</span><span class="p">])</span> <span class="o">></span> <span class="mi">0</span><span class="p">:</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'</span><span class="si">{}</span><span class="s1"> latency datapoints ready'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">model_latency_metrics</span><span class="p">[</span><span class="s1">'Datapoints'</span><span class="p">][</span><span class="mi">0</span><span class="p">][</span><span class="s1">'SampleCount'</span><span class="p">]))</span>
<span class="n">side_avg</span> <span class="o">=</span> <span class="n">model_latency_metrics</span><span class="p">[</span><span class="s1">'Datapoints'</span><span class="p">][</span><span class="mi">0</span><span class="p">][</span><span class="s1">'Average'</span><span class="p">]</span> <span class="o">/</span> <span class="n">number_of_runs</span>
<span class="n">side_p50</span> <span class="o">=</span> <span class="n">model_latency_metrics</span><span class="p">[</span><span class="s1">'Datapoints'</span><span class="p">][</span><span class="mi">0</span><span class="p">][</span><span class="s1">'ExtendedStatistics'</span><span class="p">][</span><span class="s1">'p50'</span><span class="p">]</span> <span class="o">/</span> <span class="n">number_of_runs</span>
<span class="n">side_p90</span> <span class="o">=</span> <span class="n">model_latency_metrics</span><span class="p">[</span><span class="s1">'Datapoints'</span><span class="p">][</span><span class="mi">0</span><span class="p">][</span><span class="s1">'ExtendedStatistics'</span><span class="p">][</span><span class="s1">'p90'</span><span class="p">]</span> <span class="o">/</span> <span class="n">number_of_runs</span>
<span class="n">side_p95</span> <span class="o">=</span> <span class="n">model_latency_metrics</span><span class="p">[</span><span class="s1">'Datapoints'</span><span class="p">][</span><span class="mi">0</span><span class="p">][</span><span class="s1">'ExtendedStatistics'</span><span class="p">][</span><span class="s1">'p95'</span><span class="p">]</span> <span class="o">/</span> <span class="n">number_of_runs</span>
<span class="n">side_p100</span> <span class="o">=</span> <span class="n">model_latency_metrics</span><span class="p">[</span><span class="s1">'Datapoints'</span><span class="p">][</span><span class="mi">0</span><span class="p">][</span><span class="s1">'ExtendedStatistics'</span><span class="p">][</span><span class="s1">'p100'</span><span class="p">]</span> <span class="o">/</span> <span class="n">number_of_runs</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s1">'50th Percentile Latency:</span><span class="si">{</span><span class="n">side_p50</span><span class="si">:</span><span class="s1">.1f</span><span class="si">}</span><span class="s1"> ms'</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s1">'90th Percentile Latency:</span><span class="si">{</span><span class="n">side_p90</span><span class="si">:</span><span class="s1">.1f</span><span class="si">}</span><span class="s1"> ms'</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s1">'95th Percentile Latency:</span><span class="si">{</span><span class="n">side_p95</span><span class="si">:</span><span class="s1">.1f</span><span class="si">}</span><span class="s1"> ms</span><span class="se">\n</span><span class="s1">'</span><span class="p">)</span>
<span class="n">cloudwatch_ready</span> <span class="o">=</span> <span class="kc">True</span>
<br><br><br></pre></div>
</div>
</div>
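Optionally, you can visualize the client-side latency distribution collected above. A small sketch, not part of the original notebook, reusing the `matplotlib` import and the `latencies` list from the benchmarking cells:
```
# Plot a histogram of per-request client-side latencies (in ms)
plt.hist(np.array(latencies) * 1000, bins=100)
plt.xlabel('Latency (ms)')
plt.ylabel('Number of requests')
plt.title('Client-side latency distribution')
plt.show()
```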
<div class="section" id="Cleanup">
<h3>Cleanup<a class="headerlink" href="#Cleanup" title="Permalink to this headline">#</a></h3>
<p>Endpoints should be deleted when no longer in use, to avoid costs.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">predictor</span><span class="o">.</span><span class="n">delete_endpoint</span><span class="p">(</span><span class="n">predictor</span><span class="o">.</span><span class="n">endpoint</span><span class="p">)</span>
</pre></div>
</div>
</div>
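If you also want to remove the model resource that the endpoint used, the SageMaker SDK exposes `delete_model` on the predictor. A hedged one-liner, assuming `predictor` is still in scope:
```
# Remove the SageMaker model resource associated with the (now deleted) endpoint
predictor.delete_model()
```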
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span>
</pre></div>
</div>
</div>
</div>
</div>
</div>
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
<a class="left-prev" id="prev-link" href="../bert_tutorial/tutorial_pretrained_bert.html" title="previous page">
<i class="fas fa-angle-left"></i>
<div class="prev-next-info">
<p class="prev-next-subtitle">previous</p>
<p class="prev-next-title">Compiling and Deploying HuggingFace Pretrained BERT</p>
</div>
</a>
<a class="right-next" id="next-link" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorial-torchserve.html" title="next page">
<div class="prev-next-info">
<p class="prev-next-subtitle">next</p>
<p class="prev-next-title">BERT TorchServe Tutorial</p>
</div>
<i class="fas fa-angle-right"></i>
</a>
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html>
|
2023-09-29T20:54:46.111Z
|
Using NeuronCore Pipeline with PyTorch — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html
|
# Using NeuronCore Pipeline with PyTorch — AWS Neuron Documentation
## Using NeuronCore Pipeline with PyTorch[#](#Using-NeuronCore-Pipeline-with-PyTorch "Permalink to this headline")
In this tutorial you compile a pretrained BERT base model from HuggingFace 🤗 Transformers, using the NeuronCore Pipeline feature of the AWS Neuron SDK. You benchmark model latency of the pipeline parallel mode and compare with the usual data parallel (multi-worker) deployment.
This tutorial is intended to run on an inf1.6xlarge instance, running the latest AWS Deep Learning AMI (DLAMI). The inf1.6xlarge instance size has 4 AWS Inferentia chips, for a total of 16 NeuronCores.
Verify that this Jupyter notebook is running the Python or Conda kernel environment that was set up according to the [PyTorch Installation Guide](../../../../frameworks/torch/torch-neuron/setup/pytorch-install.html). You can select the kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page.
> **Note:** Do not execute this tutorial using the “Run -> Run all cells” option.
## Install Dependencies:[#](#Install-Dependencies: "Permalink to this headline")
This tutorial requires the following pip packages:
- `torch-neuron`
- `neuron-cc[tensorflow]`
- `transformers`
Most of these packages will be installed when configuring your environment using the Neuron PyTorch setup guide. The additional HuggingFace 🤗 Transformers dependency must be installed here.
```
%env TOKENIZERS_PARALLELISM=True #Suppresses tokenizer warnings, making errors easier to detect
!pip install --upgrade "transformers==4.6.0"
```
## Compiling a BERT base model for a single NeuronCore[#](#Compiling-a-BERT-base-model-for-a-single-NeuronCore "Permalink to this headline")
To run a HuggingFace [BERTModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel) on Inferentia, you only need to add a single extra line of code to the usual 🤗 Transformers PyTorch implementation, after importing the torch\_neuron framework.
Add the argument `return_dict=False` to the BERT transformers model so it can be traced with [TorchScript](https://pytorch.org/docs/stable/jit.html). TorchScript is a way to create serializable and optimizable models from PyTorch code.
Enable padding to a maximum sequence length of 128, to test the model’s performance with a realistic payload size. You can adapt this sequence length to your application’s requirement.
You can adapt the original example in the [BertModel forward pass docstring](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel.forward) according to the following cell.
```
import torch
import torch_neuron
from transformers import BertTokenizer, BertModel
from joblib import Parallel, delayed
import numpy as np
from tqdm import tqdm
import os
import time
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased',return_dict=False)
inputs = tokenizer("Hello, my dog is cute",return_tensors="pt",max_length=128,padding='max_length',truncation=True)
```
The one extra line required is the call to the `torch.neuron.trace()` method. This call compiles the model and returns the forward method of the torch `nn.Module`, which you can use to run inference.
The compiled graph can be saved using the `torch.jit.save` function and restored using the `torch.jit.load` function for inference on Inf1 instances. During inference, the previously compiled artifacts will be loaded into the Neuron Runtime for inference execution.
```
neuron_model = torch.neuron.trace(model,
example_inputs = (inputs['input_ids'],inputs['attention_mask']),
verbose=1)
```
## Running the BERT base model on a single NeuronCore[#](#Running-the-BERT-base-model-on-a-single-NeuronCore "Permalink to this headline")
With the model already available in memory, you can time one execution and check the latency of a single inference call. You will load the model into Inferentia with a single inference call. A large “wall time” is expected when you first run the next cell; running the cell twice will show the actual inference latency:
```
%%time
# The following line tests inference and should be executed on Inf1 instance family.
outputs = neuron_model(*(inputs['input_ids'],inputs['attention_mask']))
```
You can also check for the throughput of the single model running on a single NeuronCore.
The sequential inference test (for loop) does not measure all the performance one can achieve in an instance with multiple NeuronCores. To improve hardware utilization you can run parallel inference requests over multiple model workers, which you’ll test in the Data Parallel Bonus Section below.
```
%%time
for _ in tqdm(range(100)):
outputs = neuron_model(*(inputs['input_ids'],inputs['attention_mask']))
```
Save the compiled model for later use:
```
neuron_model.save('bert-base-uncased-neuron.pt')
```
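For later reuse (the data parallel bonus section below reloads this artifact), a minimal load sketch:
```
import torch
import torch_neuron  # importing torch_neuron registers the Neuron ops needed to deserialize the graph
reloaded_model = torch.jit.load('bert-base-uncased-neuron.pt')
```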
## Compiling a BERT base model for 16 NeuronCores[#](#Compiling-a-BERT-base-model-for-16-NeuronCores "Permalink to this headline")
Our next step is to compile the same model for all 16 NeuronCores available in the inf1.6xlarge and check the performance difference when running pipeline parallel inferences.
```
import torch
import torch_neuron
from transformers import BertTokenizer, BertModel
from joblib import Parallel, delayed
import numpy as np
from tqdm import tqdm
import os
import time
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased',return_dict=False)
inputs = tokenizer("Hello, my dog is cute",return_tensors="pt",max_length=128,padding='max_length',truncation=True)
```
To enable pipeline mode during compilation, you need only to add the compiler flag `--neuroncore-pipeline-cores` and set the number of desired cores. The cell below sets up a `neuroncore_pipeline_cores` variable, which you can set to the available number of NeuronCores on the instance: _inf1.6xlarge_ has 16 NeuronCores in 4 Inferentia chips.
```
# Number of Cores in the Pipeline Mode
neuroncore_pipeline_cores = 16 # This should be 4 on an inf1.xlarge
# Compiling for neuroncore-pipeline-cores='16'
neuron_pipeline_model = torch.neuron.trace(model,
example_inputs = (inputs['input_ids'],inputs['attention_mask']),
verbose=1,
compiler_args = ['--neuroncore-pipeline-cores', str(neuroncore_pipeline_cores)]
)
```
## Running the BERT base model on 16 NeuronCores[#](#Running-the-BERT-base-model-on-16-NeuronCores "Permalink to this headline")
Next, time one execution and check the latency of a single inference call over 16 cores. You will load the model into Inferentia with a single inference call. A large “wall time” is expected when you first run the next cell; running the cell twice will show the actual inference latency:
```
%%time
# The following line tests inference and should be executed on Inf1 instance family.
outputs = neuron_pipeline_model(*(inputs['input_ids'],inputs['attention_mask']))
```
You can also check the throughput of the single model running over 16 NeuronCores.
The sequential inference test (for loop) does not measure all the performance one can achieve with Pipeline mode. As the inference runs in streaming fashion, at least 15 cores are waiting for a new call until the last one processes the first call. This results in low NeuronCore utilization. To improve hardware utilization you will require parallel inference requests, which you’ll test in the next section.
```
for _ in tqdm(range(100)):
outputs = neuron_pipeline_model(*(inputs['input_ids'],inputs['attention_mask']))
```
## Load Testing the Pipeline Parallel Mode[#](#Load-Testing-the-Pipeline-Parallel-Mode "Permalink to this headline")
To put the 16-NeuronCore group to the test, a client has to run concurrent requests to the model. In this notebook setup you achieve that by creating a thread pool with `joblib.Parallel`, with all workers in the pool running one inference call each.
You can define a new method called `inference_latency()` so that you measure the amount of time each inference calls take.
```
def inference_latency(model,*inputs):
"""
    inference_latency is a simple method to return the latency of a model inference.
    Parameters:
    model: torch model object loaded using torch.jit.load
inputs: model() args
Returns:
latency in seconds
"""
start = time.time()
_ = model(*inputs)
return time.time() - start
```
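For example, a single timed call against the pipeline model would look like this (a small usage sketch, not part of the original notebook):
```
# Time one inference call through the 16-core pipeline model
single_latency = inference_latency(neuron_pipeline_model, inputs['input_ids'], inputs['attention_mask'])
print(f'Single inference latency: {single_latency * 1000:.1f} ms')
```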
Use `tqdm` to measure the total throughput of your experiment, with the nice side effect of a progress bar. The total throughput is expected to be high, so set your experiment range to a large number, here 30k inferences.
To calculate the latency statistics over the returned 30k-element list of latencies, use the `numpy.quantile()` method.
```
t = tqdm(range(30000), position=0, leave=True)
latency = Parallel(n_jobs=12,prefer="threads")(delayed(inference_latency)(neuron_pipeline_model,*(inputs['input_ids'],inputs['attention_mask'])) for i in t)
p50 = np.quantile(latency[-10000:],0.50) * 1000
p95 = np.quantile(latency[-10000:],0.95) * 1000
p99 = np.quantile(latency[-10000:],0.99) * 1000
avg_throughput = t.total/t.format_dict['elapsed']
print(f'Avg Throughput: {avg_throughput:.1f}')
print(f'50th Percentile Latency:{p50:.1f} ms')
print(f'95th Percentile Latency:{p95:.1f} ms')
print(f'99th Percentile Latency:{p99:.1f} ms')
```
Save the compiled model for later use:
```
# Save the TorchScript graph
neuron_pipeline_model.save('bert-base-uncased-neuron-pipeline.pt')
```
## Bonus Section - Load Testing Data Parallel Mode[#](#Bonus-Section---Load-Testing-Data-Parallel-Mode "Permalink to this headline")
```
import torch
import torch_neuron
from transformers import BertTokenizer
from joblib import Parallel, delayed
import numpy as np
from tqdm import tqdm
import os
import time
def inference_latency(model,*inputs):
"""
    inference_latency is a simple method to return the latency of a model inference.
    Parameters:
    model: torch model object loaded using torch.jit.load
inputs: model() args
Returns:
latency in seconds
"""
start = time.time()
_ = model(*inputs)
return time.time() - start
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
inputs = tokenizer("Hello, my dog is cute",return_tensors="pt",max_length=128,padding='max_length',truncation=True)
```
You use the `'NEURON_RT_NUM_CORES'` environment variable to define how many NeuronCores are to be used. Set the environment variable to the number of individual workers you want to test in parallel.
`torch_neuron` will load one model per NeuronCore group until it runs out of cores. At that point, if the Python process continues to spawn more model objects using `torch.jit.load`, `torch_neuron` will start stacking more than one model per core, until the Inferentia chip memory is full.
Inferentia is able to run inference over all the loaded models, but only one at a time. The Neuron Runtime takes care of dynamically switching the model context as requests come in; no extra worker process management is required. Use 1 model per NeuronCore to achieve maximum performance.
The following cell creates a list with as many models as NeuronCore Groups and execute one single dummy inference to load the models into Inferentia.
```
import warnings
# Number of data parallel workers
number_of_workers=16 # This number should be 4 on an inf1.xlarge
# Setting up a data parallel group
os.environ['NEURON_RT_NUM_CORES'] = str(number_of_workers)
# Loading 'number_of_workers' amount of models in Python memory
model_list = [torch.jit.load('bert-base-uncased-neuron.pt') for _ in range(number_of_workers)]
# Dummy inference to load models to Inferentia
_ = [mod(*(inputs['input_ids'],inputs['attention_mask'])) for mod in model_list]
```
Adapt the call to `joblib.Parallel()` iterating over a concatenated version of the `model_list`, to run ‘round-robin’ calls to each of the model workers.
```
t = tqdm(model_list*1500,position=0, leave=True)
latency = Parallel(n_jobs=number_of_workers,prefer="threads")(delayed(inference_latency)(mod,*(inputs['input_ids'],inputs['attention_mask'])) for mod in t)
p50 = np.quantile(latency[-10000:],0.50) * 1000
p95 = np.quantile(latency[-10000:],0.95) * 1000
p99 = np.quantile(latency[-10000:],0.99) * 1000
avg_throughput = t.total/t.format_dict['elapsed']
print(f'Avg Throughput: {avg_throughput:.1f}')
print(f'50th Percentile Latency:{p50:.1f} ms')
print(f'95th Percentile Latency:{p95:.1f} ms')
print(f'99th Percentile Latency:{p99:.1f} ms')
```
For this model, despite the larger number of workers, the per-request latency increases when running a single model per core, which in turn reduces the total throughput.
This behavior may not repeat if the model memory footprint or the input payload size changes, e.g. batch size > 1. We encourage you to experiment with the data parallel and pipeline parallel modes to optimize your application performance.
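A hedged sketch of such a batched experiment, reusing the `model` and `tokenizer` objects defined earlier; the batch size of 8 is an arbitrary choice, and the model must be recompiled because `torch-neuron` fixes tensor shapes at trace time:
```
# Recompile the model for batch size 8; input shapes are fixed at trace time
batch_inputs = tokenizer(["Hello, my dog is cute"] * 8, return_tensors="pt",
                         max_length=128, padding='max_length', truncation=True)
neuron_model_b8 = torch.neuron.trace(
    model,
    example_inputs=(batch_inputs['input_ids'], batch_inputs['attention_mask']))
```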
|
</style><style type="text/css">.CtxtMenu_MenuClose { position:absolute; cursor:pointer; display:inline-block; border:2px solid #AAA; border-radius:18px; -webkit-border-radius: 18px; /* Safari and Chrome */ -moz-border-radius: 18px; /* Firefox */ -khtml-border-radius: 18px; /* Konqueror */ font-family: "Courier New", Courier; font-size:24px; color:#F0F0F0}
.CtxtMenu_MenuClose span { display:block; background-color:#AAA; border:1.5px solid; border-radius:18px; -webkit-border-radius: 18px; /* Safari and Chrome */ -moz-border-radius: 18px; /* Firefox */ -khtml-border-radius: 18px; /* Konqueror */ line-height:0; padding:8px 0 6px /* may need to be browser-specific */}
.CtxtMenu_MenuClose:hover { color:white!important; border:2px solid #CCC!important}
.CtxtMenu_MenuClose:hover span { background-color:#CCC!important}
.CtxtMenu_MenuClose:hover:focus { outline:none}
</style><style type="text/css">.CtxtMenu_Menu { position:absolute; background-color:white; color:black; width:auto; padding:5px 0px; border:1px solid #CCCCCC; margin:0; cursor:default; font: menu; text-align:left; text-indent:0; text-transform:none; line-height:normal; letter-spacing:normal; word-spacing:normal; word-wrap:normal; white-space:nowrap; float:none; z-index:201; border-radius: 5px; /* Opera 10.5 and IE9 */ -webkit-border-radius: 5px; /* Safari and Chrome */ -moz-border-radius: 5px; /* Firefox */ -khtml-border-radius: 5px; /* Konqueror */ box-shadow:0px 10px 20px #808080; /* Opera 10.5 and IE9 */ -webkit-box-shadow:0px 10px 20px #808080; /* Safari 3 & Chrome */ -moz-box-shadow:0px 10px 20px #808080; /* Forefox 3.5 */ -khtml-box-shadow:0px 10px 20px #808080; /* Konqueror */}
.CtxtMenu_MenuItem { padding: 1px 2em; background:transparent;}
.CtxtMenu_MenuArrow { position:absolute; right:.5em; padding-top:.25em; color:#666666; font-family: null; font-size: .75em}
.CtxtMenu_MenuActive .CtxtMenu_MenuArrow {color:white}
.CtxtMenu_MenuArrow.CtxtMenu_RTL {left:.5em; right:auto}
.CtxtMenu_MenuCheck { position:absolute; left:.7em; font-family: null}
.CtxtMenu_MenuCheck.CtxtMenu_RTL { right:.7em; left:auto }
.CtxtMenu_MenuRadioCheck { position:absolute; left: .7em;}
.CtxtMenu_MenuRadioCheck.CtxtMenu_RTL { right: .7em; left:auto}
.CtxtMenu_MenuInputBox { padding-left: 1em; right:.5em; color:#666666; font-family: null;}
.CtxtMenu_MenuInputBox.CtxtMenu_RTL { left: .1em;}
.CtxtMenu_MenuComboBox { left:.1em; padding-bottom:.5em;}
.CtxtMenu_MenuSlider { left: .1em;}
.CtxtMenu_SliderValue { position:absolute; right:.1em; padding-top:.25em; color:#333333; font-size: .75em}
.CtxtMenu_SliderBar { outline: none; background: #d3d3d3}
.CtxtMenu_MenuLabel { padding: 1px 2em 3px 1.33em; font-style:italic}
.CtxtMenu_MenuRule { border-top: 1px solid #DDDDDD; margin: 4px 3px;}
.CtxtMenu_MenuDisabled { color:GrayText}
.CtxtMenu_MenuActive { background-color: #606872; color: white;}
.CtxtMenu_MenuDisabled:focus { background-color: #E8E8E8}
.CtxtMenu_MenuLabel:focus { background-color: #E8E8E8}
.CtxtMenu_ContextMenu:focus { outline:none}
.CtxtMenu_ContextMenu .CtxtMenu_MenuItem:focus { outline:none}
.CtxtMenu_SelectionMenu { position:relative; float:left; border-bottom: none; -webkit-box-shadow:none; -webkit-border-radius:0px; }
.CtxtMenu_SelectionItem { padding-right: 1em;}
.CtxtMenu_Selection { right: 40%; width:50%; }
.CtxtMenu_SelectionBox { padding: 0em; max-height:20em; max-width: none; background-color:#FFFFFF;}
.CtxtMenu_SelectionDivider { clear: both; border-top: 2px solid #000000;}
.CtxtMenu_Menu .CtxtMenu_MenuClose { top:-10px; left:-10px}
</style><style id="MJX-CHTML-styles">
mjx-container[jax="CHTML"] {
line-height: 0;
}
mjx-container [space="1"] {
margin-left: .111em;
}
mjx-container [space="2"] {
margin-left: .167em;
}
mjx-container [space="3"] {
margin-left: .222em;
}
mjx-container [space="4"] {
margin-left: .278em;
}
mjx-container [space="5"] {
margin-left: .333em;
}
mjx-container [rspace="1"] {
margin-right: .111em;
}
mjx-container [rspace="2"] {
margin-right: .167em;
}
mjx-container [rspace="3"] {
margin-right: .222em;
}
mjx-container [rspace="4"] {
margin-right: .278em;
}
mjx-container [rspace="5"] {
margin-right: .333em;
}
mjx-container [size="s"] {
font-size: 70.7%;
}
mjx-container [size="ss"] {
font-size: 50%;
}
mjx-container [size="Tn"] {
font-size: 60%;
}
mjx-container [size="sm"] {
font-size: 85%;
}
mjx-container [size="lg"] {
font-size: 120%;
}
mjx-container [size="Lg"] {
font-size: 144%;
}
mjx-container [size="LG"] {
font-size: 173%;
}
mjx-container [size="hg"] {
font-size: 207%;
}
mjx-container [size="HG"] {
font-size: 249%;
}
mjx-container [width="full"] {
width: 100%;
}
mjx-box {
display: inline-block;
}
mjx-block {
display: block;
}
mjx-itable {
display: inline-table;
}
mjx-row {
display: table-row;
}
mjx-row > * {
display: table-cell;
}
mjx-mtext {
display: inline-block;
}
mjx-mstyle {
display: inline-block;
}
mjx-merror {
display: inline-block;
color: red;
background-color: yellow;
}
mjx-mphantom {
visibility: hidden;
}
_::-webkit-full-page-media, _:future, :root mjx-container {
will-change: opacity;
}
mjx-assistive-mml {
position: absolute !important;
top: 0px;
left: 0px;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0px 0px 0px !important;
border: 0px !important;
display: block !important;
width: auto !important;
overflow: hidden !important;
-webkit-touch-callout: none;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
mjx-assistive-mml[display="block"] {
width: 100% !important;
}
mjx-c::before {
display: block;
width: 0;
}
.MJX-TEX {
font-family: MJXZERO, MJXTEX;
}
.TEX-B {
font-family: MJXZERO, MJXTEX-B;
}
.TEX-I {
font-family: MJXZERO, MJXTEX-I;
}
.TEX-MI {
font-family: MJXZERO, MJXTEX-MI;
}
.TEX-BI {
font-family: MJXZERO, MJXTEX-BI;
}
.TEX-S1 {
font-family: MJXZERO, MJXTEX-S1;
}
.TEX-S2 {
font-family: MJXZERO, MJXTEX-S2;
}
.TEX-S3 {
font-family: MJXZERO, MJXTEX-S3;
}
.TEX-S4 {
font-family: MJXZERO, MJXTEX-S4;
}
.TEX-A {
font-family: MJXZERO, MJXTEX-A;
}
.TEX-C {
font-family: MJXZERO, MJXTEX-C;
}
.TEX-CB {
font-family: MJXZERO, MJXTEX-CB;
}
.TEX-FR {
font-family: MJXZERO, MJXTEX-FR;
}
.TEX-FRB {
font-family: MJXZERO, MJXTEX-FRB;
}
.TEX-SS {
font-family: MJXZERO, MJXTEX-SS;
}
.TEX-SSB {
font-family: MJXZERO, MJXTEX-SSB;
}
.TEX-SSI {
font-family: MJXZERO, MJXTEX-SSI;
}
.TEX-SC {
font-family: MJXZERO, MJXTEX-SC;
}
.TEX-T {
font-family: MJXZERO, MJXTEX-T;
}
.TEX-V {
font-family: MJXZERO, MJXTEX-V;
}
.TEX-VB {
font-family: MJXZERO, MJXTEX-VB;
}
mjx-stretchy-v mjx-c, mjx-stretchy-h mjx-c {
font-family: MJXZERO, MJXTEX-S1, MJXTEX-S4, MJXTEX, MJXTEX-A ! important;
}
@font-face /* 0 */ {
font-family: MJXZERO;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Zero.woff") format("woff");
}
@font-face /* 1 */ {
font-family: MJXTEX;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Main-Regular.woff") format("woff");
}
@font-face /* 2 */ {
font-family: MJXTEX-B;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Main-Bold.woff") format("woff");
}
@font-face /* 3 */ {
font-family: MJXTEX-I;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Math-Italic.woff") format("woff");
}
@font-face /* 4 */ {
font-family: MJXTEX-MI;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Main-Italic.woff") format("woff");
}
@font-face /* 5 */ {
font-family: MJXTEX-BI;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Math-BoldItalic.woff") format("woff");
}
@font-face /* 6 */ {
font-family: MJXTEX-S1;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size1-Regular.woff") format("woff");
}
@font-face /* 7 */ {
font-family: MJXTEX-S2;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size2-Regular.woff") format("woff");
}
@font-face /* 8 */ {
font-family: MJXTEX-S3;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size3-Regular.woff") format("woff");
}
@font-face /* 9 */ {
font-family: MJXTEX-S4;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size4-Regular.woff") format("woff");
}
@font-face /* 10 */ {
font-family: MJXTEX-A;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_AMS-Regular.woff") format("woff");
}
@font-face /* 11 */ {
font-family: MJXTEX-C;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Calligraphic-Regular.woff") format("woff");
}
@font-face /* 12 */ {
font-family: MJXTEX-CB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Calligraphic-Bold.woff") format("woff");
}
@font-face /* 13 */ {
font-family: MJXTEX-FR;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Fraktur-Regular.woff") format("woff");
}
@font-face /* 14 */ {
font-family: MJXTEX-FRB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Fraktur-Bold.woff") format("woff");
}
@font-face /* 15 */ {
font-family: MJXTEX-SS;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_SansSerif-Regular.woff") format("woff");
}
@font-face /* 16 */ {
font-family: MJXTEX-SSB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_SansSerif-Bold.woff") format("woff");
}
@font-face /* 17 */ {
font-family: MJXTEX-SSI;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_SansSerif-Italic.woff") format("woff");
}
@font-face /* 18 */ {
font-family: MJXTEX-SC;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Script-Regular.woff") format("woff");
}
@font-face /* 19 */ {
font-family: MJXTEX-T;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Typewriter-Regular.woff") format("woff");
}
@font-face /* 20 */ {
font-family: MJXTEX-V;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Vector-Regular.woff") format("woff");
}
@font-face /* 21 */ {
font-family: MJXTEX-VB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Vector-Bold.woff") format("woff");
}
</style><script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="current nav bd-sidenav">
<li class="toctree-l1 current active has-children">
<a class="reference internal" href="../../../../frameworks/torch/index.html">
PyTorch Neuron
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 current active has-children">
<a class="reference internal" href="../../../../frameworks/torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l3 current active has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4 current active">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"><!-- Inserted RTD Footer -->
<div class="injected">
<div class="rst-versions rst-badge" data-toggle="rst-versions">
<span class="rst-current-version" data-toggle="rst-current-version">
<span class="fa fa-book"> </span>
v: v2.14.1
<span class="fa fa-caret-down"></span>
</span>
<div class="rst-other-versions">
<dl>
<dt>Versions</dt>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">latest</a>
</dd>
<dd class="rtd-current-item">
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.14.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.14.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.13.2/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.13.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.13.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.13.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.13.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.13.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.12.2/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.12.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.12.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.12.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.12.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.12.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.11.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.11.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.10.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.10.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.9.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.9.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.9.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.9.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.8.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.8.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.7.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.7.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.6.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.6.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.5.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.5.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.4.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.4.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.3.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v2.3.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.19.2/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v1.19.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.19.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v1.19.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.19.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v1.19.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.18.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v1.18.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.17.2/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v1.17.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.17.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v1.17.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.17.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v1.17.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.3/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v1.16.3</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.2/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v1.16.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v1.16.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.0/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html">v1.16.0</a>
</dd>
</dl>
<dl>
<dt>Downloads</dt>
<dd><a href="//awsdocs-neuron.readthedocs-hosted.com/_/downloads/en/v2.14.1/pdf/">PDF</a></dd>
</dl>
<dl>
<dt>On GitHub</dt>
<dd>
<a href="https://github.com/aws/aws-neuron-sdk/blob/v2.14.1//src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.ipynb">View</a>
</dd>
</dl>
<hr>
<div>
<div>
Documentation hosted by <a href="https://readthedocs.com">Read the Docs</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fsrc/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.ipynb" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../../_sources/src/examples/pytorch/pipeline_tutorial/neuroncore_pipeline_pytorch.ipynb.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.ipynb</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Install-Dependencies:">
Install Dependencies:
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Compiling-a-BERT-base-model-for-a-single-NeuronCore">
Compiling a BERT base model for a single NeuronCore
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Running-the-BERT-base-model-on-a-single-NeuronCore">
Running the BERT base model on a single NeuronCore
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Compiling-a-BERT-base-model-for-16-NeuronCores">
Compiling a BERT base model for 16 NeuronCores
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Running-the-BERT-base-model-on-16-NeuronCores">
Running the BERT base model on 16 NeuronCores
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Load-Testing-the-Pipeline-Parallel-Mode">
Load Testing the Pipeline Parallel Mode
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Bonus-Section---Load-Testing-Data-Parallel-Mode">
Bonus Section - Load Testing Data Parallel Mode
</a>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>Using NeuronCore Pipeline with PyTorch</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
<div>
<h2> Contents </h2>
</div>
<nav aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Install-Dependencies:">
Install Dependencies:
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Compiling-a-BERT-base-model-for-a-single-NeuronCore">
Compiling a BERT base model for a single NeuronCore
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Running-the-BERT-base-model-on-a-single-NeuronCore">
Running the BERT base model on a single NeuronCore
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Compiling-a-BERT-base-model-for-16-NeuronCores">
Compiling a BERT base model for 16 NeuronCores
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Running-the-BERT-base-model-on-16-NeuronCores">
Running the BERT base model on 16 NeuronCores
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Load-Testing-the-Pipeline-Parallel-Mode">
Load Testing the Pipeline Parallel Mode
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Bonus-Section---Load-Testing-Data-Parallel-Mode">
Bonus Section - Load Testing Data Parallel Mode
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
<style>
/* CSS for nbsphinx extension */
/* remove conflicting styling from Sphinx themes */
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt *,
div.nbinput.container div.input_area pre,
div.nboutput.container div.output_area pre,
div.nbinput.container div.input_area .highlight,
div.nboutput.container div.output_area .highlight {
border: none;
padding: 0;
margin: 0;
box-shadow: none;
}
div.nbinput.container > div[class*=highlight],
div.nboutput.container > div[class*=highlight] {
margin: 0;
}
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt * {
background: none;
}
div.nboutput.container div.output_area .highlight,
div.nboutput.container div.output_area pre {
background: unset;
}
div.nboutput.container div.output_area div.highlight {
color: unset; /* override Pygments text color */
}
/* avoid gaps between output lines */
div.nboutput.container div[class*=highlight] pre {
line-height: normal;
}
/* input/output containers */
div.nbinput.container,
div.nboutput.container {
display: -webkit-flex;
display: flex;
align-items: flex-start;
margin: 0;
width: 100%;
}
@media (max-width: 540px) {
div.nbinput.container,
div.nboutput.container {
flex-direction: column;
}
}
/* input container */
div.nbinput.container {
padding-top: 5px;
}
/* last container */
div.nblast.container {
padding-bottom: 5px;
}
/* input prompt */
div.nbinput.container div.prompt pre {
color: #307FC1;
}
/* output prompt */
div.nboutput.container div.prompt pre {
color: #BF5B3D;
}
/* all prompts */
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: 4.5ex;
padding-top: 5px;
position: relative;
user-select: none;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: absolute;
right: 0;
margin-right: 0.3ex;
}
@media (max-width: 540px) {
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: unset;
text-align: left;
padding: 0.4em;
}
div.nboutput.container div.prompt.empty {
padding: 0;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: unset;
}
}
/* disable scrollbars on prompts */
div.nbinput.container div.prompt pre,
div.nboutput.container div.prompt pre {
overflow: hidden;
}
/* input/output area */
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
-webkit-flex: 1;
flex: 1;
overflow: auto;
}
@media (max-width: 540px) {
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
width: 100%;
}
}
/* input area */
div.nbinput.container div.input_area {
border: 1px solid #e0e0e0;
border-radius: 2px;
/*background: #f5f5f5;*/
}
/* override MathJax center alignment in output cells */
div.nboutput.container div[class*=MathJax] {
text-align: left !important;
}
/* override sphinx.ext.imgmath center alignment in output cells */
div.nboutput.container div.math p {
text-align: left;
}
/* standard error */
div.nboutput.container div.output_area.stderr {
background: #fdd;
}
/* ANSI colors */
.ansi-black-fg { color: #3E424D; }
.ansi-black-bg { background-color: #3E424D; }
.ansi-black-intense-fg { color: #282C36; }
.ansi-black-intense-bg { background-color: #282C36; }
.ansi-red-fg { color: #E75C58; }
.ansi-red-bg { background-color: #E75C58; }
.ansi-red-intense-fg { color: #B22B31; }
.ansi-red-intense-bg { background-color: #B22B31; }
.ansi-green-fg { color: #00A250; }
.ansi-green-bg { background-color: #00A250; }
.ansi-green-intense-fg { color: #007427; }
.ansi-green-intense-bg { background-color: #007427; }
.ansi-yellow-fg { color: #DDB62B; }
.ansi-yellow-bg { background-color: #DDB62B; }
.ansi-yellow-intense-fg { color: #B27D12; }
.ansi-yellow-intense-bg { background-color: #B27D12; }
.ansi-blue-fg { color: #208FFB; }
.ansi-blue-bg { background-color: #208FFB; }
.ansi-blue-intense-fg { color: #0065CA; }
.ansi-blue-intense-bg { background-color: #0065CA; }
.ansi-magenta-fg { color: #D160C4; }
.ansi-magenta-bg { background-color: #D160C4; }
.ansi-magenta-intense-fg { color: #A03196; }
.ansi-magenta-intense-bg { background-color: #A03196; }
.ansi-cyan-fg { color: #60C6C8; }
.ansi-cyan-bg { background-color: #60C6C8; }
.ansi-cyan-intense-fg { color: #258F8F; }
.ansi-cyan-intense-bg { background-color: #258F8F; }
.ansi-white-fg { color: #C5C1B4; }
.ansi-white-bg { background-color: #C5C1B4; }
.ansi-white-intense-fg { color: #A1A6B2; }
.ansi-white-intense-bg { background-color: #A1A6B2; }
.ansi-default-inverse-fg { color: #FFFFFF; }
.ansi-default-inverse-bg { background-color: #000000; }
.ansi-bold { font-weight: bold; }
.ansi-underline { text-decoration: underline; }
div.nbinput.container div.input_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight].math,
div.nboutput.container div.output_area.rendered_html,
div.nboutput.container div.output_area > div.output_javascript,
div.nboutput.container div.output_area:not(.rendered_html) > img{
padding: 5px;
margin: 0;
}
/* fix copybtn overflow problem in chromium (needed for 'sphinx_copybutton') */
div.nbinput.container div.input_area > div[class^='highlight'],
div.nboutput.container div.output_area > div[class^='highlight']{
overflow-y: hidden;
}
/* hide copybtn icon on prompts (needed for 'sphinx_copybutton') */
.prompt .copybtn {
display: none;
}
/* Some additional styling taken form the Jupyter notebook CSS */
.jp-RenderedHTMLCommon table,
div.rendered_html table {
border: none;
border-collapse: collapse;
border-spacing: 0;
color: black;
font-size: 12px;
table-layout: fixed;
}
.jp-RenderedHTMLCommon thead,
div.rendered_html thead {
border-bottom: 1px solid black;
vertical-align: bottom;
}
.jp-RenderedHTMLCommon tr,
.jp-RenderedHTMLCommon th,
.jp-RenderedHTMLCommon td,
div.rendered_html tr,
div.rendered_html th,
div.rendered_html td {
text-align: right;
vertical-align: middle;
padding: 0.5em 0.5em;
line-height: normal;
white-space: normal;
max-width: none;
border: none;
}
.jp-RenderedHTMLCommon th,
div.rendered_html th {
font-weight: bold;
}
.jp-RenderedHTMLCommon tbody tr:nth-child(odd),
div.rendered_html tbody tr:nth-child(odd) {
background: #f5f5f5;
}
.jp-RenderedHTMLCommon tbody tr:hover,
div.rendered_html tbody tr:hover {
background: rgba(66, 165, 245, 0.2);
}
</style>
## Using NeuronCore Pipeline with PyTorch[#](#Using-NeuronCore-Pipeline-with-PyTorch "Permalink to this headline")
In this tutorial you compile a pretrained BERT base model from HuggingFace 🤗 Transformers, using the NeuronCore Pipeline feature of the AWS Neuron SDK. You benchmark the model latency of the pipeline parallel mode and compare it with the usual data parallel (multi-worker) deployment.
This tutorial is intended to run on an inf1.6xlarge instance, running the latest AWS Deep Learning AMI (DLAMI). The inf1.6xlarge instance size has 4 AWS Inferentia chips for a total of 16 NeuronCores.
Verify that this Jupyter notebook is running the Python or Conda kernel environment that was set up according to the [PyTorch Installation Guide](../../../../frameworks/torch/torch-neuron/setup/pytorch-install.html). You can select the kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page.
> **Note:** Do not execute this tutorial using the “Run -> Run all cells” option.
## Install Dependencies:[#](#Install-Dependencies: "Permalink to this headline")
This tutorial requires the following pip packages:
- `torch-neuron`
- `neuron-cc[tensorflow]`
- `transformers`
Most of these packages will be installed when configuring your environment using the Neuron PyTorch setup guide. The additional HuggingFace 🤗 Transformers dependency must be installed here.
```
%env TOKENIZERS_PARALLELISM=True #Suppresses tokenizer warnings making errors easier to detect
!pip install --upgrade "transformers==4.6.0"
```
## Compiling a BERT base model for a single NeuronCore[#](#Compiling-a-BERT-base-model-for-a-single-NeuronCore "Permalink to this headline")
To run a HuggingFace [BertModel](https://huggingface.co/transformers/model_doc/bert.html#bertmodel) on Inferentia, you only need to add a single extra line of code to the usual 🤗 Transformers PyTorch implementation, after importing the torch_neuron framework.
Add the argument `return_dict=False` to the BERT transformers model so it can be traced with [TorchScript](https://pytorch.org/docs/stable/jit.html). TorchScript is a way to create serializable and optimizable models from PyTorch code.
Enable padding to a maximum sequence length of 128 to test the model’s performance with a realistic payload size. You can adapt this sequence length to your application’s requirement.
You can adapt the original example from the [BertModel forward pass docstring](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel.forward) according to the following cell.
```
import torch
import torch_neuron
from transformers import BertTokenizer, BertModel
from joblib import Parallel, delayed
import numpy as np
from tqdm import tqdm
import os
import time

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased', return_dict=False)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt", max_length=128, padding='max_length', truncation=True)
```
The one extra line required is the call to the torch.neuron.trace() method. This call compiles the model and returns a traced torch `nn.Module` whose forward method you can use to run inference.
The compiled graph can be saved using the `torch.jit.save` function and restored using the `torch.jit.load` function for inference on Inf1 instances. During inference, the previously compiled artifacts will be loaded into the Neuron Runtime for inference execution.
```
neuron_model = torch.neuron.trace(model,
                                  example_inputs=(inputs['input_ids'], inputs['attention_mask']),
                                  verbose=1)
```
## Running the BERT base model on a single NeuronCore[#](#Running-the-BERT-base-model-on-a-single-NeuronCore "Permalink to this headline")
With the model already available in memory, you can time one execution and check the latency of a single inference call. The first inference call also loads the model into Inferentia, so a large “wall time” is expected when you first run the next cell; running the cell twice will show the actual inference latency:
```
%%time
# The following line tests inference and should be executed on Inf1 instance family.
outputs = neuron_model(*(inputs['input_ids'], inputs['attention_mask']))
```
You can also check the throughput of the single model running on a single NeuronCore.
The sequential inference test (for loop) does not measure all the performance one can achieve in an instance with multiple NeuronCores. To improve hardware utilization you can run parallel inference requests over multiple model workers, which you’ll test in the Data Parallel Bonus Section below.
```
%%time
for _ in tqdm(range(100)):
    outputs = neuron_model(*(inputs['input_ids'], inputs['attention_mask']))
```
Save the compiled model for later use:
```
neuron_model.save('bert-base-uncased-neuron.pt')
```
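As a quick sanity check (a minimal sketch, assuming the file above was written to the current working directory), the saved artifact can be restored with `torch.jit.load` and called exactly like the traced model:
```
# torch_neuron must already be imported so the Neuron operators can be resolved
reloaded_model = torch.jit.load('bert-base-uncased-neuron.pt')
outputs = reloaded_model(*(inputs['input_ids'], inputs['attention_mask']))
```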
## Compiling a BERT base model for 16 NeuronCores[#](#Compiling-a-BERT-base-model-for-16-NeuronCores "Permalink to this headline")
Our next step is to compile the same model for all 16 NeuronCores available in the inf1.6xlarge and check the performance difference when running pipeline parallel inferences.
```
import torch
import torch_neuron
from transformers import BertTokenizer, BertModel
from joblib import Parallel, delayed
import numpy as np
from tqdm import tqdm
import os
import time

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased', return_dict=False)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt", max_length=128, padding='max_length', truncation=True)
```
To enable pipeline mode during compilation, you need only add the compiler flag `--neuroncore-pipeline-cores` and set the number of desired cores. The cell below sets up a `neuroncore_pipeline_cores` value, which you can set to the available number of NeuronCores on the instance: *inf1.6xlarge* has 16 NeuronCores in 4 Inferentia chips.
```
# Number of Cores in the Pipeline Mode
neuroncore_pipeline_cores = 16 # This value should be 4 on an inf1.xlarge

# Compiling for neuroncore-pipeline-cores='16'
neuron_pipeline_model = torch.neuron.trace(model,
                                           example_inputs=(inputs['input_ids'], inputs['attention_mask']),
                                           verbose=1,
                                           compiler_args=['--neuroncore-pipeline-cores', str(neuroncore_pipeline_cores)])
```
## Running the BERT base model on 16 NeuronCores[#](#Running-the-BERT-base-model-on-16-NeuronCores "Permalink to this headline")
Next, time one execution and check the latency of a single inference call over the 16 cores. The first inference call also loads the model into Inferentia, so a large “wall time” is expected when you first run the next cell; running the cell twice will show the actual inference latency:
```
%%time
# The following line tests inference and should be executed on Inf1 instance family.
outputs = neuron_pipeline_model(*(inputs['input_ids'], inputs['attention_mask']))
```
Check also the throughput of the single model running over 16 NeuronCores.
The sequential inference test (for loop) does not measure all the performance one can achieve with Pipeline mode. Because the inference runs in streaming fashion, at least 15 cores are waiting for a new call until the last one processes the first call. This results in low NeuronCore utilization. To improve hardware utilization you will require parallel inference requests, which you’ll test in the next section.
```
for _ in tqdm(range(100)):
    outputs = neuron_pipeline_model(*(inputs['input_ids'], inputs['attention_mask']))
```
## Load Testing the Pipeline Parallel Mode[#](#Load-Testing-the-Pipeline-Parallel-Mode "Permalink to this headline")
To put the 16-NeuronCore group to the test, a client has to run concurrent requests to the model. In this notebook setup you achieve that by creating a thread pool with `joblib.Parallel`, with all workers on the pool running one inference call each.
You can define a new method called `inference_latency()` to measure the amount of time each inference call takes.
```
def inference_latency(model, *inputs):
    """
    inference_latency is a simple method to return the latency of a model inference.

        Parameters:
            model: torch model object loaded using torch.jit.load
            inputs: model() args

        Returns:
            latency in seconds
    """
    start = time.time()
    _ = model(*inputs)
    return time.time() - start
```
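For example, a single timed call (a quick usage check, reusing the padded `inputs` created above):
```
latency_seconds = inference_latency(neuron_pipeline_model, inputs['input_ids'], inputs['attention_mask'])
print(f'{latency_seconds * 1000:.1f} ms')
```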
Use `tqdm` to measure the total throughput of your experiment, with the nice side effect of a “cool progress bar!”. The total throughput is expected to be high, so set your experiment range to a large number, here 30k inferences.
To calculate the latency statistics over the returned list of 30k latencies, use the `numpy.quantile()` method.
```
t = tqdm(range(30000), position=0, leave=True)
latency = Parallel(n_jobs=12, prefer="threads")(delayed(inference_latency)(neuron_pipeline_model, *(inputs['input_ids'], inputs['attention_mask'])) for i in t)

p50 = np.quantile(latency[-10000:], 0.50) * 1000
p95 = np.quantile(latency[-10000:], 0.95) * 1000
p99 = np.quantile(latency[-10000:], 0.99) * 1000
avg_throughput = t.total / t.format_dict['elapsed']

print(f'Avg Throughput: {avg_throughput:.1f}')
print(f'50th Percentile Latency: {p50:.1f} ms')
print(f'95th Percentile Latency: {p95:.1f} ms')
print(f'99th Percentile Latency: {p99:.1f} ms')
```
Save the compiled model for later use:
```
# Save the TorchScript graph
neuron_pipeline_model.save('bert-base-uncased-neuron-pipeline.pt')
```
## Bonus Section - Load Testing Data Parallel Mode[#](#Bonus-Section---Load-Testing-Data-Parallel-Mode "Permalink to this headline")
```
import torch
import torch_neuron
from transformers import BertTokenizer
from joblib import Parallel, delayed
import numpy as np
from tqdm import tqdm
import os
import time

def inference_latency(model, *inputs):
    """
    inference_latency is a simple method to return the latency of a model inference.

        Parameters:
            model: torch model object loaded using torch.jit.load
            inputs: model() args

        Returns:
            latency in seconds
    """
    start = time.time()
    _ = model(*inputs)
    return time.time() - start

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt", max_length=128, padding='max_length', truncation=True)
```
You use the `NEURON_RT_NUM_CORES` environment variable to define how many NeuronCores will be used. Set the environment variable to the number of individual workers you want to test in parallel.
`torch_neuron` will load one model per NeuronCore group until it runs out of cores. At that point, if the Python process continues to spawn more model objects using `torch.jit.load`, `torch_neuron` will start stacking more than one model per core, until the Inferentia chip memory is full.
Inferentia is able to run inference over all the loaded models, but only one at a time. The Neuron Runtime takes care of dynamically switching the model context as requests come in; no extra worker process management is required. Use 1 model per NeuronCore to achieve maximum performance.
The following cell creates a list with as many models as NeuronCore groups and executes one single dummy inference to load the models into Inferentia.
```
import warnings

# Number of data parallel workers
number_of_workers = 16 # This number should be 4 on an inf1.xlarge

# Setting up a data parallel group
os.environ['NEURON_RT_NUM_CORES'] = str(number_of_workers)

# Loading 'number_of_workers' amount of models in Python memory
model_list = [torch.jit.load('bert-base-uncased-neuron.pt') for _ in range(number_of_workers)]

# Dummy inference to load models to Inferentia
_ = [mod(*(inputs['input_ids'], inputs['attention_mask'])) for mod in model_list]
```
Adapt the call to `joblib.Parallel()` to iterate over a concatenated version of the `model_list`, running ‘round-robin’ calls to each of the model workers.
```
t = tqdm(model_list*1500, position=0, leave=True)
latency = Parallel(n_jobs=number_of_workers, prefer="threads")(delayed(inference_latency)(mod, *(inputs['input_ids'], inputs['attention_mask'])) for mod in t)

p50 = np.quantile(latency[-10000:], 0.50) * 1000
p95 = np.quantile(latency[-10000:], 0.95) * 1000
p99 = np.quantile(latency[-10000:], 0.99) * 1000
avg_throughput = t.total / t.format_dict['elapsed']

print(f'Avg Throughput: {avg_throughput:.1f}')
print(f'50th Percentile Latency: {p50:.1f} ms')
print(f'95th Percentile Latency: {p95:.1f} ms')
print(f'99th Percentile Latency: {p99:.1f} ms')
```
For this model, despite the larger number of workers, the per-worker latency increases when running a single model per core, which in turn reduces the total throughput.
This behavior may not repeat if the model memory footprint or the input payload size changes, e.g. batch size > 1. We encourage you to experiment with the data parallel and pipeline parallel modes to optimize your application performance.
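To make such comparisons repeatable, you could wrap the load test in a small helper. The sketch below is illustrative (the `benchmark` function is not part of the original tutorial) and reuses `inference_latency`, `inputs`, and the models defined above:
```
def benchmark(models, n_jobs, iterations=30000):
    # Round-robin the concurrent requests over the given list of model workers
    t = tqdm([models[i % len(models)] for i in range(iterations)], position=0, leave=True)
    latency = Parallel(n_jobs=n_jobs, prefer="threads")(
        delayed(inference_latency)(mod, *(inputs['input_ids'], inputs['attention_mask'])) for mod in t)
    return {q: np.quantile(latency[-10000:], q) * 1000 for q in (0.50, 0.95, 0.99)}

# Pipeline parallel (one 16-core model) vs. data parallel (16 single-core workers)
print(benchmark([neuron_pipeline_model], n_jobs=12))
print(benchmark(model_list, n_jobs=number_of_workers))
```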
|
2023-09-29T20:54:46.381Z
|
Transformers MarianMT Tutorial — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/src/examples/pytorch/transformers-marianmt.html
|
# Transformers MarianMT Tutorial — AWS Neuron Documentation
## Transformers MarianMT Tutorial[#](#Transformers-MarianMT-Tutorial "Permalink to this headline")
In this tutorial, you will deploy the [HuggingFace MarianMT](https://huggingface.co/transformers/v4.0.1/model_doc/marian.html) model for text translation.
This Jupyter notebook should be run on an inf1.6xlarge instance since you will be loading and compiling several large models.
Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [PyTorch Installation Guide](../../../frameworks/torch/torch-neuron/setup/pytorch-install.html). You can select the kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page.
To generate text, you will be using the beam search algorithm to incrementally generate token candidates until the full output text has been created. Unlike simple single-pass models, this algorithm divides the work into two distinct phases (a toy sketch of this loop appears after the list):
- **Encoder**: Convert the input text into an encoded representation. (Executed once)
- **Decoder**: Use the encoded representation of the input text and the current output tokens to incrementally generate the set of next best candidate tokens. (Executed many times)
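The following standalone sketch illustrates the two-phase structure. The `encode`/`decode` functions are hypothetical stand-ins (not the MarianMT modules), and it uses greedy selection instead of full beam search for brevity:
```
import torch

def encode(input_ids):                      # encoder phase: executed once per input text
    return input_ids.float().unsqueeze(-1)  # stand-in for a real encoded representation

def decode(tokens, encoded):                # decoder phase: executed once per output token
    return torch.randn(tokens.shape[0], 8)  # stand-in scores over a vocabulary of 8 tokens

input_ids = torch.tensor([[3, 7, 1]])
encoded = encode(input_ids)                 # phase 1
tokens = torch.tensor([[0]])                # start-of-sequence token
for _ in range(5):                          # phase 2: incremental generation
    scores = decode(tokens, encoded)
    next_token = scores.argmax(dim=-1, keepdim=True)  # greedy pick; beam search would keep num_beams candidates
    tokens = torch.cat([tokens, next_token], dim=1)
```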
In this tutorial you will perform the following steps:
- **Compile**: Compile both the Encoder and Decoder for Neuron using simplified interfaces for inference.
- **Infer**: Run on CPU and Neuron and compare results.
Finally, a completely unrolled decoder will be built which simplifies the implementation at the cost of performing fixed-length inferences.
## Install Dependencies:[#](#Install-Dependencies: "Permalink to this headline")
This tutorial has the following dependencies:
- `transformers==4.25.1`
- `torch-neuron`
- `sentencepiece`
- `neuron-cc[tensorflow]`
The following will install the required `transformers` version. Note that encoder/decoder API changes across different minor versions require that you be specific about the version used. Also note that the `torch-neuron` version is pinned due to `transformers` compatibility issues.
```
!pip install sentencepiece transformers==4.26.1
```
## Parameters[#](#Parameters "Permalink to this headline")
The parameters of a generative model can be tuned for different use-cases. In this example, you’ll tailor the parameters to a single inference beam search for an on-demand inference use-case. See the [MarianConfig](https://huggingface.co/transformers/v4.0.1/model_doc/marian.html#marianconfig) for parameter details.
Rather than varying the encoder/decoder token sizes at runtime, you must define these parameters prior to compilation. The encoder/decoder token sizes are important tunable parameters as a large token sequence will offer greater sentence length flexibility but perform worse than a small token sequence.
To maximize performance on Neuron, the `num_beams`, `max_encoder_length` and `max_decoder_length` should be made as small as possible for the use-case.
For this tutorial you will use a model that translates sentences of up to 32 tokens from English to German.
```
%env TOKENIZERS_PARALLELISM=True #Suppresses tokenizer warnings making errors easier to detect
model_name = "Helsinki-NLP/opus-mt-en-de" # English -> German model
num_texts = 1 # Number of input texts to decode
num_beams = 4 # Number of beams per input text
max_encoder_length = 32 # Maximum input token length
max_decoder_length = 32 # Maximum output token length
```
## CPU Model Inference[#](#CPU-Model-Inference "Permalink to this headline")
Start by executing the model on CPU to test its execution.
The following defines the inference function which will be used to compare the Neuron and CPU output. In this example you will display all beam search sequences that were generated. For a real on-demand use case, set the `num_beams` to `1` to return only the top result.
```
def infer(model, tokenizer, text):

    # Truncate and pad the max length to ensure that the token size is compatible with the fixed-sized encoder (Not necessary for pure CPU execution)
    batch = tokenizer(text, max_length=max_decoder_length, truncation=True, padding='max_length', return_tensors="pt")
    output = model.generate(**batch, max_length=max_decoder_length, num_beams=num_beams, num_return_sequences=num_beams)
    results = [tokenizer.decode(t, skip_special_tokens=True) for t in output]

    print('Texts:')
    for i, summary in enumerate(results):
        print(i + 1, summary)
```
Note that after loading the model, we also set the maximum length. This will later be used to limit the size of the compiled model.
```
from transformers import MarianMTModel, MarianTokenizer
model_cpu = MarianMTModel.from_pretrained(model_name)
model_cpu.config.max_length = max_decoder_length
model_cpu.eval()
tokenizer = MarianTokenizer.from_pretrained(model_name)
sample_text = "I am a small frog."
```
```
infer(model_cpu, tokenizer, sample_text)
```
## Padded Model[#](#Padded-Model "Permalink to this headline")
In order to perform inference on Neuron, the model must be changed in a way that supports tracing and fixed-sized inputs. One way to do this is to pad the model inputs to the maximum possible tensor sizes. The benefit of using a padded model is that it supports variable length text generation up to a specified length `max_decoder_length`. A consequence of padding is that it can negatively impact performance due to large data transfers.
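As a standalone illustration of the padding scheme (arbitrary token values, not part of the tutorial code), every partially generated sequence is zero-padded up to the fixed size the model is compiled for:
```
import torch
from torch.nn import functional as F

max_decoder_length = 32
tokens = torch.tensor([[128, 64, 11]])  # 3 tokens generated so far
padded = F.pad(tokens, (0, max_decoder_length - tokens.shape[1]))  # zero-pad up to the fixed size
print(padded.shape)  # torch.Size([1, 32]) on every iteration, regardless of progress
```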
### PaddedEncoder & PaddedDecoder Modules[#](#PaddedEncoder-&-PaddedDecoder-Modules "Permalink to this headline")
Here you will define wrappers around the encoder and decoder portions of the generation model that are compatible with `torch.jit.trace` as well as fixed-sized inputs.
The following are important features which are distinct from the default configuration:
1. Disabled `return_dict`. When this is enabled, the network uses `dataclass` type outputs which are not compatible with `torch.jit.trace`.
2. Disabled `use_cache`. When this option is enabled, the network expects a collection of cache tensors which grow upon each iteration. Since Neuron requires fixed sized inputs, this must be disabled.
3. The `GenerationMixin:beam_search` implementation uses only the logits for the current iteration index from the original decoder layer output. Since inputs must be padded, performance can be improved by selecting only a subset of the hidden state prior to the final linear layer. For efficiency on Neuron, this reduction uses an elementwise-multiply to mask out the unused hidden values and then sums along an axis.
4. Since a reduction step is inserted between the decoder output and the final logit calculation, the original `model` attribute is not used. Instead the `PaddedDecoder` class combines the decoder, reducer, and linear layers into a combined forward pass. In the original model there is a clear distinction between the decoder layer and the final linear layer. These layers are fused together to get one large fully optimized graph.
```
import torch
from torch.nn import functional as F

class PaddedEncoder(torch.nn.Module):

    def __init__(self, model):
        super().__init__()
        self.encoder = model.model.encoder
        self.main_input_name = 'input_ids'

    def forward(self, input_ids, attention_mask):
        return self.encoder(input_ids, attention_mask=attention_mask, return_dict=False)

class PaddedDecoder(torch.nn.Module):

    def __init__(self, model):
        super().__init__()
        self.weight = model.model.shared.weight.clone().detach()
        self.bias = model.final_logits_bias.clone().detach()
        self.decoder = model.model.decoder

    def forward(self, input_ids, attention_mask, encoder_outputs, index):
        # Invoke the decoder
        hidden, = self.decoder(
            input_ids=input_ids,
            encoder_hidden_states=encoder_outputs,
            encoder_attention_mask=attention_mask,
            return_dict=False,
            use_cache=False,
        )

        _, n_length, _ = hidden.shape

        # Create selection mask
        mask = torch.arange(n_length, dtype=torch.float32) == index
        mask = mask.view(1, -1, 1)

        # Broadcast mask
        masked = torch.multiply(hidden, mask)

        # Reduce along 1st dimension
        hidden = torch.sum(masked, 1, keepdims=True)

        # Compute final linear layer for token probabilities
        logits = F.linear(
            hidden,
            self.weight,
            bias=self.bias
        )
        return logits
```
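Item 3 above replaces direct indexing with Neuron-friendly operations. The following standalone check (arbitrary shapes, not the model's real dimensions) confirms that the elementwise-multiply-and-sum in `PaddedDecoder.forward` selects the same hidden values as slicing would:
```
import torch

hidden = torch.randn(4, 32, 512)  # [batch, padded_length, hidden_size]
index = torch.tensor(5)           # current iteration index

mask = (torch.arange(32, dtype=torch.float32) == index).view(1, -1, 1)
reduced = torch.sum(hidden * mask, 1, keepdim=True)

assert torch.allclose(reduced, hidden[:, 5:6, :])  # identical values, expressed as multiply + sum
```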
### PaddedGenerator - GenerationMixin Class[#](#PaddedGenerator---GenerationMixin-Class "Permalink to this headline")
On text generation tasks, HuggingFace Transformers defines a [GenerationMixin](https://huggingface.co/transformers/v4.0.1/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin) base class which provides standard methods and algorithms to generate text. For this tutorial, you will be using the beam search algorithm on encoder/decoder architectures.
To be able to use these methods, you will be defining your own class derived from the GenerationMixin class to run a beam search. This will invoke the encoder and decoder layers in a way that is compatible with fixed sized inputs and traced modules. This means you must import the base class and the output objects ([Seq2SeqLMOutput](https://huggingface.co/transformers/v4.0.1/main_classes/output.html#transformers.modeling_outputs.Seq2SeqLMOutput), [BaseModelOutput](https://huggingface.co/transformers/v4.0.1/main_classes/output.html#transformers.modeling_outputs.BaseModelOutput)) used by the [beam\_search](https://huggingface.co/transformers/v4.0.1/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.beam_search) algorithm.
The `GenerationMixin:generate` method will use `GenerationMixin:beam_search`, which requires you to define your own class implementation that invokes the `PaddedEncoder` and `PaddedDecoder` modules using padded inputs. The standard generator model implementation will not work by default because it is intended to infer with variable-sized (growing) input tensors.
The `from_model` method is defined to create the `PaddedGenerator` from an existing pretrained generator class.
To invoke the Encoder and Decoder traced modules in a way that is compatible with the `GenerationMixin:beam_search` implementation, the `get_encoder`, `__call__`, and `prepare_inputs_for_generation` methods are overridden.
Lastly, the class defines methods for serialization so that the model can be easily saved and loaded.
```
import os

from transformers import GenerationMixin, AutoConfig
from transformers.modeling_outputs import Seq2SeqLMOutput, BaseModelOutput
from transformers.modeling_utils import PreTrainedModel

class PaddedGenerator(PreTrainedModel, GenerationMixin):

    @classmethod
    def from_model(cls, model):
        generator = cls(model.config)
        generator.encoder = PaddedEncoder(model)
        generator.decoder = PaddedDecoder(model)
        return generator

    def prepare_inputs_for_generation(
            self,
            input_ids,
            encoder_outputs=None,
            attention_mask=None,
            **kwargs,
    ):
        # Pad the inputs for Neuron
        current_length = input_ids.shape[1]
        pad_size = self.config.max_length - current_length
        return dict(
            input_ids=F.pad(input_ids, (0, pad_size)),
            attention_mask=attention_mask,
            encoder_outputs=encoder_outputs.last_hidden_state,
            current_length=torch.tensor(current_length - 1),
        )

    def get_encoder(self):
        def encode(input_ids, attention_mask, **kwargs):
            output, = self.encoder(input_ids, attention_mask)
            return BaseModelOutput(
                last_hidden_state=output,
            )
        return encode

    def forward(self, input_ids, attention_mask, encoder_outputs, current_length, **kwargs):
        logits = self.decoder(input_ids, attention_mask, encoder_outputs, current_length)
        return Seq2SeqLMOutput(logits=logits)

    @property
    def device(self):  # Attribute required by beam search
        return torch.device('cpu')

    def save_pretrained(self, directory):
        if os.path.isfile(directory):
            print(f"Provided path ({directory}) should be a directory, not a file")
            return
        os.makedirs(directory, exist_ok=True)
        torch.jit.save(self.encoder, os.path.join(directory, 'encoder.pt'))
        torch.jit.save(self.decoder, os.path.join(directory, 'decoder.pt'))
        self.config.save_pretrained(directory)

    @classmethod
    def from_pretrained(cls, directory):
        config = AutoConfig.from_pretrained(directory)
        obj = cls(config)
        obj.encoder = torch.jit.load(os.path.join(directory, 'encoder.pt'))
        obj.decoder = torch.jit.load(os.path.join(directory, 'decoder.pt'))
        setattr(obj.encoder, 'main_input_name', 'input_ids')  # Attribute required by beam search
        return obj
```
### Padded CPU Inference[#](#Padded-CPU-Inference "Permalink to this headline")
To start, it is important to ensure that the transformations we have made to the model were successful. Using the classes defined above we can test that the padded model's execution on CPU produces output identical to that of the original model, also running on CPU.
```
padded_model_cpu = PaddedGenerator.from_model(model_cpu)
infer(padded_model_cpu, tokenizer, sample_text)
```
### Padded Neuron Tracing & Inference[#](#Padded-Neuron-Tracing-&-Inference "Permalink to this headline")
Now that the padded version of the model is confirmed to produce the same outputs as the non-padded version, the model can be compiled for Neuron.
```
import torch
import torch_neuron

def trace(model, num_texts, num_beams, max_decoder_length, max_encoder_length):
    """
    Traces the encoder and decoder modules for use on Neuron.

    This function fixes the network to the given sizes. Once the model has been
    compiled to a given size, the inputs to these networks must always be of
    fixed size.

    Args:
        model (PaddedGenerator): The padded generator to compile for Neuron
        num_texts (int): The number of input texts to translate at once
        num_beams (int): The number of beams to compute per text
        max_decoder_length (int): The maximum number of tokens to be generated
        max_encoder_length (int): The maximum number of input tokens that will be encoded
    """

    # Trace the encoder
    inputs = (
        torch.ones((num_texts, max_encoder_length), dtype=torch.long),
        torch.ones((num_texts, max_encoder_length), dtype=torch.long),
    )
    encoder = torch_neuron.trace(model.encoder, inputs)

    # Trace the decoder (with expanded inputs)
    batch_size = num_texts * num_beams
    inputs = (
        torch.ones((batch_size, max_decoder_length), dtype=torch.long),
        torch.ones((batch_size, max_encoder_length), dtype=torch.long),
        torch.ones((batch_size, max_encoder_length, model.config.d_model), dtype=torch.float),
        torch.tensor(0),
    )
    decoder = torch_neuron.trace(model.decoder, inputs)

    traced = PaddedGenerator(model.config)
    traced.encoder = encoder
    traced.decoder = decoder
    setattr(encoder, 'main_input_name', 'input_ids')  # Attribute required by beam search
    return traced
```
```
padded_model_neuron = trace(padded_model_cpu, num_texts, num_beams, max_decoder_length, max_encoder_length)
```
Comparing the Neuron execution to the original CPU implementation, you will see the exact same generated text.
```
# Neuron execution for comparison with the CPU results
infer(padded_model_neuron, tokenizer, sample_text)
```
### Padded Neuron Serialization
Finally, we can verify that the model can be serialized and reloaded, so that it can be used later in its precompiled format.
```
padded_model_neuron.save_pretrained('NeuronPaddedMarianMT')
padded_model_loaded = PaddedGenerator.from_pretrained('NeuronPaddedMarianMT')
infer(padded_model_loaded, tokenizer, sample_text)
```
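As a quick sanity check, the directory should now contain the two TorchScript artifacts written by `save_pretrained` plus the config file(s) written by `config.save_pretrained`:
```
import os
print(sorted(os.listdir('NeuronPaddedMarianMT')))  # expect encoder.pt, decoder.pt, and config file(s)
```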
## Greedy Unrolled Model
An unrolled version of the model can achieve better performance in some cases since all operations execute on the Neuron hardware without returning to CPU. The consequence is that, because the generation loop never returns to CPU, the entire sequence up to `max_decoder_length` is produced in a single forward pass.
The following module performs greedy text generation. Unlike the original beam search text generation, this implementation always selects the most probable token and does not generate multiple result texts.
### GreedyUnrolledGenerator Module
```
class GreedyUnrolledGenerator(torch.nn.Module):

    def __init__(self, model):
        super().__init__()
        self.config = model.config
        self.model = model

    def forward(self, input_ids, attention_mask):
        # Generate the encoder state for the input tokens. This is only done once and the state is reused.
        encoder_outputs, = self.model.model.encoder(input_ids, attention_mask=attention_mask, return_dict=False)

        # Set the initial state for the decode loop. This will grow per decoder iteration
        tokens = torch.full((input_ids.size(0), 2), self.config.decoder_start_token_id)

        # Iteratively invoke the decoder on incrementally generated `tokens` to generate a `next_token`.
        # Note that unlike the GenerationMixin.generate function, there is no early-exit if the stop token
        # has been reached. This will always run a fixed number of iterations.
        for i in range(self.config.max_length):
            hidden, = self.model.model.decoder(
                input_ids=tokens,
                encoder_hidden_states=encoder_outputs,
                encoder_attention_mask=attention_mask,
                return_dict=False,
                use_cache=False,
            )  # size: [batch, current_length, vocab_size]
            logits = F.linear(
                hidden[:, -1, :],
                self.model.model.shared.weight,
                bias=self.model.final_logits_bias
            )
            next_tokens = torch.argmax(logits, dim=1, keepdims=True)
            tokens = torch.cat([tokens, next_tokens], dim=1)

        return tokens
```
### Greedy CPU Inference
The inference code must be updated since the `generate` method is no longer used. This is because the entire generative inference loop occurs within the `GreedyUnrolledGenerator.forward` method.
```
def infer_greedy(model, tokenizer, text):
    batch = tokenizer(text, max_length=max_decoder_length, truncation=True, padding='max_length', return_tensors="pt")
    inputs = batch['input_ids'], batch['attention_mask']
    tokens = model(*inputs)  # Call forward directly; `generate` is not used for the unrolled model
    print('Texts:')
    for i, t in enumerate(tokens):
        result = tokenizer.decode(t, skip_special_tokens=True)
        print(i + 1, result)
```
As in the previous sections of this tutorial, the greedy model is first executed on CPU to validate that correct results are produced. In this example, the generated text matches the first result of the original beam search.
```
model_cpu.config.max_length = 8 # This controls the number of decoder loops. Reduced to improve compilation speed.
greedy_cpu = GreedyUnrolledGenerator(model_cpu)
infer_greedy(greedy_cpu, tokenizer, sample_text)
```
### Greedy Neuron Tracing & Inference
Similarly, the tracing is simplified since the `GreedyUnrolledGenerator.forward` can now be compiled as a single unit.
For compilation efficiency, two changes are made compared to normal compilation:
- `torch.jit.freeze` is used because it can _sometimes_ speed up compilation in cases where a module is re-used multiple times. Here it is more efficient because `self.model.model.decoder` is used in a loop.
- The `torch_neuron.trace` option `fallback` is set to `False`. This forces all operations to execute on Neuron. Most of the time this is not recommended or efficient. In this case, it is more efficient because it means a single subgraph is produced rather than many. Usually one subgraph would be produced per decoder iteration since `aten::embedding` is executed in a loop; the `aten::embedding` operation is otherwise executed on CPU by default since this is usually more efficient than executing it on Neuron.
You may notice that compilation takes significantly longer with the unrolled model since new operations are inserted into the compute graph for every single decoder iteration. This creates a much larger model graph even though the weights are re-used.
```
example = (
    torch.ones((num_texts, max_encoder_length), dtype=torch.long),
    torch.ones((num_texts, max_encoder_length), dtype=torch.long),
)
greedy_cpu.eval()
greedy_trace = torch.jit.trace(greedy_cpu, example)
greedy_frozen = torch.jit.freeze(greedy_trace)
greedy_neuron = torch_neuron.trace(greedy_frozen, example, fallback=False)
```
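To see where the extra compilation time comes from, you can optionally inspect the traced graph before Neuron compilation. This is only a rough sketch (node counts vary by `torch` version), but the count grows with `config.max_length` because the decoder operations are duplicated for every unrolled iteration:
```
# Count nodes in the unrolled TorchScript graph
num_nodes = sum(1 for _ in greedy_trace.graph.nodes())
print(f'{num_nodes} graph nodes for {greedy_cpu.config.max_length} unrolled decoder iterations')
```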
```
infer_greedy(greedy_neuron, tokenizer, sample_text)
```
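Since the motivation for the unrolled model is performance, a rough wall-clock comparison of the CPU and Neuron variants can confirm the benefit. This is only a sketch (a single fixed input and a handful of iterations, reusing `example` from above); a real benchmark would use representative inputs and more iterations:
```
import time

def benchmark(model, inputs, iterations=10):
    with torch.no_grad():
        model(*inputs)  # Warm up so one-time initialization is not measured
        start = time.perf_counter()
        for _ in range(iterations):
            model(*inputs)
    return (time.perf_counter() - start) / iterations

print('CPU latency (s):   ', benchmark(greedy_cpu, example))
print('Neuron latency (s):', benchmark(greedy_neuron, example))
```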
### Greedy Neuron Serialization
Unlike the previous version of the model, which used the `GenerationMixin` base class, this greedy version can be serialized using the regular `torch.jit.save` and `torch.jit.load` utilities since it is a pure TorchScript module.
```
torch.jit.save(greedy_neuron, 'greedy_neuron.pt')
loaded_greedy_neuron = torch.jit.load('greedy_neuron.pt')
infer_greedy(loaded_greedy_neuron, tokenizer, sample_text)
```
## Appendix
### BART (Mask Filling Task)
The `PaddedGenerator` class can be applied to the BART model for the task of filling in mask tokens.
```
from transformers import BartForConditionalGeneration, BartTokenizer
bart_name = "facebook/bart-large"
bart_model = BartForConditionalGeneration.from_pretrained(bart_name)
bart_model.config.max_length = max_decoder_length
bart_tokenizer = BartTokenizer.from_pretrained(bart_name)
bart_text = "UN Chief Says There Is No <mask> in Syria"
```
```
# CPU Execution
infer(bart_model, bart_tokenizer, bart_text)
```
```
# Neuron Execution
padded_bart = PaddedGenerator.from_model(bart_model)
bart_neuron = trace(padded_bart, num_texts, num_beams, max_decoder_length, max_encoder_length)
infer(bart_neuron, bart_tokenizer, bart_text)
```
### Pegasus (Summarization Task)
The `PaddedGenerator` class can be applied to the Pegasus model for summarization.
```
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
pegasus_name = 'google/pegasus-xsum'
pegasus_model = PegasusForConditionalGeneration.from_pretrained(pegasus_name)
pegasus_model.config.max_length = max_decoder_length
pegasus_tokenizer = PegasusTokenizer.from_pretrained(pegasus_name)
pegasus_text = "PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires."
```
```
# CPU Execution
infer(pegasus_model, pegasus_tokenizer, pegasus_text)
```
```
# Neuron Execution
padded_pegasus = PaddedGenerator.from_model(pegasus_model)
pegasus_neuron = trace(padded_pegasus, num_texts, num_beams, max_decoder_length, max_encoder_length)
infer(pegasus_neuron, pegasus_tokenizer, pegasus_text)
```
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fsrc/examples/pytorch/transformers-marianmt.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/src/examples/pytorch/transformers-marianmt.ipynb" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../_sources/src/examples/pytorch/transformers-marianmt.ipynb.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.ipynb</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Install-Dependencies:">
Install Dependencies:
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Parameters">
Parameters
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#CPU-Model-Inference">
CPU Model Inference
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Padded-Model">
Padded Model
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#PaddedEncoder-&-PaddedDecoder-Modules">
PaddedEncoder & PaddedDecoder Modules
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#PaddedGenerator---GenerationMixin-Class">
PaddedGenerator - GenerationMixin Class
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Padded-CPU-Inference">
Padded CPU Inference
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Padded-Neuron-Tracing-&-Inference">
Padded Neuron Tracing & Inference
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Padded-Neuron-Serialization">
Padded Neuron Serialization
</a>
</li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Greedy-Unrolled-Model">
Greedy Unrolled Model
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#GreedyUnrolledGenerator-Module">
GreedyUnrolledGenerator Module
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Greedy-CPU-Inference">
Greedy CPU Inference
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Greedy-Neuron-Tracing-&-Inference">
Greedy Neuron Tracing & Inference
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Greedy-Neuron-Serialization">
Greedy Neuron Serialization
</a>
</li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Appendix">
Appendix
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#BART-(Mask-Filling-Task)">
BART (Mask Filling Task)
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Pegasus-(Summarization-Task)">
Pegasus (Summarization Task)
</a>
</li>
</ul>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<h1>Transformers MarianMT Tutorial</h1>
<main id="main-content" role="main">
<div>
<style>
/* CSS for nbsphinx extension */
/* remove conflicting styling from Sphinx themes */
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt *,
div.nbinput.container div.input_area pre,
div.nboutput.container div.output_area pre,
div.nbinput.container div.input_area .highlight,
div.nboutput.container div.output_area .highlight {
border: none;
padding: 0;
margin: 0;
box-shadow: none;
}
div.nbinput.container > div[class*=highlight],
div.nboutput.container > div[class*=highlight] {
margin: 0;
}
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt * {
background: none;
}
div.nboutput.container div.output_area .highlight,
div.nboutput.container div.output_area pre {
background: unset;
}
div.nboutput.container div.output_area div.highlight {
color: unset; /* override Pygments text color */
}
/* avoid gaps between output lines */
div.nboutput.container div[class*=highlight] pre {
line-height: normal;
}
/* input/output containers */
div.nbinput.container,
div.nboutput.container {
display: -webkit-flex;
display: flex;
align-items: flex-start;
margin: 0;
width: 100%;
}
@media (max-width: 540px) {
div.nbinput.container,
div.nboutput.container {
flex-direction: column;
}
}
/* input container */
div.nbinput.container {
padding-top: 5px;
}
/* last container */
div.nblast.container {
padding-bottom: 5px;
}
/* input prompt */
div.nbinput.container div.prompt pre {
color: #307FC1;
}
/* output prompt */
div.nboutput.container div.prompt pre {
color: #BF5B3D;
}
/* all prompts */
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: 4.5ex;
padding-top: 5px;
position: relative;
user-select: none;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: absolute;
right: 0;
margin-right: 0.3ex;
}
@media (max-width: 540px) {
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: unset;
text-align: left;
padding: 0.4em;
}
div.nboutput.container div.prompt.empty {
padding: 0;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: unset;
}
}
/* disable scrollbars on prompts */
div.nbinput.container div.prompt pre,
div.nboutput.container div.prompt pre {
overflow: hidden;
}
/* input/output area */
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
-webkit-flex: 1;
flex: 1;
overflow: auto;
}
@media (max-width: 540px) {
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
width: 100%;
}
}
/* input area */
div.nbinput.container div.input_area {
border: 1px solid #e0e0e0;
border-radius: 2px;
/*background: #f5f5f5;*/
}
/* override MathJax center alignment in output cells */
div.nboutput.container div[class*=MathJax] {
text-align: left !important;
}
/* override sphinx.ext.imgmath center alignment in output cells */
div.nboutput.container div.math p {
text-align: left;
}
/* standard error */
div.nboutput.container div.output_area.stderr {
background: #fdd;
}
/* ANSI colors */
.ansi-black-fg { color: #3E424D; }
.ansi-black-bg { background-color: #3E424D; }
.ansi-black-intense-fg { color: #282C36; }
.ansi-black-intense-bg { background-color: #282C36; }
.ansi-red-fg { color: #E75C58; }
.ansi-red-bg { background-color: #E75C58; }
.ansi-red-intense-fg { color: #B22B31; }
.ansi-red-intense-bg { background-color: #B22B31; }
.ansi-green-fg { color: #00A250; }
.ansi-green-bg { background-color: #00A250; }
.ansi-green-intense-fg { color: #007427; }
.ansi-green-intense-bg { background-color: #007427; }
.ansi-yellow-fg { color: #DDB62B; }
.ansi-yellow-bg { background-color: #DDB62B; }
.ansi-yellow-intense-fg { color: #B27D12; }
.ansi-yellow-intense-bg { background-color: #B27D12; }
.ansi-blue-fg { color: #208FFB; }
.ansi-blue-bg { background-color: #208FFB; }
.ansi-blue-intense-fg { color: #0065CA; }
.ansi-blue-intense-bg { background-color: #0065CA; }
.ansi-magenta-fg { color: #D160C4; }
.ansi-magenta-bg { background-color: #D160C4; }
.ansi-magenta-intense-fg { color: #A03196; }
.ansi-magenta-intense-bg { background-color: #A03196; }
.ansi-cyan-fg { color: #60C6C8; }
.ansi-cyan-bg { background-color: #60C6C8; }
.ansi-cyan-intense-fg { color: #258F8F; }
.ansi-cyan-intense-bg { background-color: #258F8F; }
.ansi-white-fg { color: #C5C1B4; }
.ansi-white-bg { background-color: #C5C1B4; }
.ansi-white-intense-fg { color: #A1A6B2; }
.ansi-white-intense-bg { background-color: #A1A6B2; }
.ansi-default-inverse-fg { color: #FFFFFF; }
.ansi-default-inverse-bg { background-color: #000000; }
.ansi-bold { font-weight: bold; }
.ansi-underline { text-decoration: underline; }
div.nbinput.container div.input_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight].math,
div.nboutput.container div.output_area.rendered_html,
div.nboutput.container div.output_area > div.output_javascript,
div.nboutput.container div.output_area:not(.rendered_html) > img{
padding: 5px;
margin: 0;
}
/* fix copybtn overflow problem in chromium (needed for 'sphinx_copybutton') */
div.nbinput.container div.input_area > div[class^='highlight'],
div.nboutput.container div.output_area > div[class^='highlight']{
overflow-y: hidden;
}
/* hide copybtn icon on prompts (needed for 'sphinx_copybutton') */
.prompt .copybtn {
display: none;
}
/* Some additional styling taken form the Jupyter notebook CSS */
.jp-RenderedHTMLCommon table,
div.rendered_html table {
border: none;
border-collapse: collapse;
border-spacing: 0;
color: black;
font-size: 12px;
table-layout: fixed;
}
.jp-RenderedHTMLCommon thead,
div.rendered_html thead {
border-bottom: 1px solid black;
vertical-align: bottom;
}
.jp-RenderedHTMLCommon tr,
.jp-RenderedHTMLCommon th,
.jp-RenderedHTMLCommon td,
div.rendered_html tr,
div.rendered_html th,
div.rendered_html td {
text-align: right;
vertical-align: middle;
padding: 0.5em 0.5em;
line-height: normal;
white-space: normal;
max-width: none;
border: none;
}
.jp-RenderedHTMLCommon th,
div.rendered_html th {
font-weight: bold;
}
.jp-RenderedHTMLCommon tbody tr:nth-child(odd),
div.rendered_html tbody tr:nth-child(odd) {
background: #f5f5f5;
}
.jp-RenderedHTMLCommon tbody tr:hover,
div.rendered_html tbody tr:hover {
background: rgba(66, 165, 245, 0.2);
}
</style>
<div class="section" id="Transformers-MarianMT-Tutorial">
<h1>Transformers MarianMT Tutorial<a class="headerlink" href="#Transformers-MarianMT-Tutorial" title="Permalink to this headline">#</a></h1>
<p>In this tutorial, you will deploy the <a class="reference external" href="https://huggingface.co/transformers/v4.0.1/model_doc/marian.html">HuggingFace MarianMT</a> model for text translation.</p>
<p>This Jupyter notebook should be run on an inf1.6xlarge instance since you will be loading and compiling several large models.</p>
<p>Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the <a class="reference external" href="../../../frameworks/torch/torch-neuron/setup/pytorch-install.html">PyTorch Installation Guide</a>. You can select the kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page.</p>
<p>To generate text, you will be using the beam search algorithm to incrementally generate token candidates until the full output text has been created. Unlike simple single-pass models, this algorithm divides the work into two distinct phases:</p>
<ul class="simple">
<li><p><strong>Encoder</strong>: Convert the input text into an encoded representation. (Executed once)</p></li>
<li><p><strong>Decoder</strong>: Use the encoded representation of the input text and the current output tokens to incrementally generate the set of next best candidate tokens. (Executed many times)</p></li>
</ul>
<p>In this tutorial you will perform the following steps:</p>
<ul class="simple">
<li><p><strong>Compile</strong>: Compile both the Encoder and Decoder for Neuron using simplified interfaces for inference.</p></li>
<li><p><strong>Infer</strong>: Run on CPU and Neuron and compare results.</p></li>
</ul>
<p>Finally, a completely unrolled decoder will be built which simplifies the implementation at the cost of performing fixed-length inferences.</p>
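Before diving in, the following minimal sketch illustrates the encoder-once / decoder-many-times pattern. It is illustrative only: the `dummy_encoder`/`dummy_decoder` callables are random stand-ins for the real MarianMT modules, and it uses greedy token selection rather than beam search.

```python
# Sketch of the two-phase generation loop with toy stand-in modules.
import torch

def dummy_encoder(input_ids):
    # One hidden vector per input token (hidden size 8 chosen arbitrarily)
    return torch.randn(input_ids.shape[0], input_ids.shape[1], 8)

def dummy_decoder(encoder_state, tokens):
    # Scores over a 16-token toy vocabulary for the next position
    return torch.randn(tokens.shape[0], 16)

input_ids = torch.ones((1, 4), dtype=torch.long)
encoder_state = dummy_encoder(input_ids)        # Encoder: executed once
tokens = torch.zeros((1, 1), dtype=torch.long)  # Start token
for _ in range(5):                              # Decoder: executed many times
    logits = dummy_decoder(encoder_state, tokens)
    next_token = logits.argmax(dim=-1, keepdim=True)
    tokens = torch.cat([tokens, next_token], dim=-1)
print(tokens)  # Grows by one token per decoder iteration
```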
<div class="section" id="Install-Dependencies:">
<h2>Install Dependencies:<a class="headerlink" href="#Install-Dependencies:" title="Permalink to this headline">#</a></h2>
<p>This tutorial has the following dependencies:</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">transformers==4.25.1</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">torch-neuron</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">sentencepiece</span></code></p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">neuron-cc[tensorflow]</span></code></p></li>
</ul>
<p>The following will install the required <code class="docutils literal notranslate"><span class="pre">transformers</span></code> version. Note that encoder/decoder API changes across different minor versions requires that you are specific about the version used. Also note that the <code class="docutils literal notranslate"><span class="pre">torch-neuron</span></code> version is pinned due to <code class="docutils literal notranslate"><span class="pre">transformer</span></code> compatibility issues.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">!</span>pip<span class="w"> </span>install<span class="w"> </span>sentencepiece<span class="w"> </span><span class="nv">transformers</span><span class="o">==</span><span class="m">4</span>.26.1
</pre></div>
</div>
</div>
</div>
<div class="section" id="Parameters">
<h2>Parameters<a class="headerlink" href="#Parameters" title="Permalink to this headline">#</a></h2>
<p>The parameters of a generative model can be tuned for different use-cases. In this example, you’ll tailor the parameters to a single inference beam search for an on-demand inference use-case. See the <a class="reference external" href="https://huggingface.co/transformers/v4.0.1/model_doc/marian.html#marianconfig">MarianConfig</a> for parameter details.</p>
<p>Rather than varying the encoder/decoder token sizes at runtime, you must define these parameters prior to compilation. The encoder/decoder token sizes are important tunable parameters as a large token sequence will offer greater sentence length flexibility but perform worse than a small token sequence.</p>
<p>To maximize performance on Neuron, the <code class="docutils literal notranslate"><span class="pre">num_beams</span></code>, <code class="docutils literal notranslate"><span class="pre">max_encode_length</span></code> and <code class="docutils literal notranslate"><span class="pre">max_decoder_length</span></code> should be made as small as possible for the use-case.</p>
<p>For this tutorial you will use a model that translates sentences of up to 32 token from English to German.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">%</span><span class="k">env</span> TOKENIZERS_PARALLELISM=True #Supresses tokenizer warnings making errors easier to detect
<span class="n">model_name</span> <span class="o">=</span> <span class="s2">"Helsinki-NLP/opus-mt-en-de"</span> <span class="c1"># English -> German model</span>
<span class="n">num_texts</span> <span class="o">=</span> <span class="mi">1</span> <span class="c1"># Number of input texts to decode</span>
<span class="n">num_beams</span> <span class="o">=</span> <span class="mi">4</span> <span class="c1"># Number of beams per input text</span>
<span class="n">max_encoder_length</span> <span class="o">=</span> <span class="mi">32</span> <span class="c1"># Maximum input token length</span>
<span class="n">max_decoder_length</span> <span class="o">=</span> <span class="mi">32</span> <span class="c1"># Maximum output token length</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="CPU-Model-Inference">
<h2>CPU Model Inference<a class="headerlink" href="#CPU-Model-Inference" title="Permalink to this headline">#</a></h2>
<p>Start by executing the model on CPU to test its execution.</p>
<p>The following defines the inference function which will be used to compare the Neuron and CPU output. In this example you will display all beam search sequences that were generated. For a real on-demand use case, set the <code class="docutils literal notranslate"><span class="pre">num_beams</span></code> to <code class="docutils literal notranslate"><span class="pre">1</span></code> to return only the top result.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">infer</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">tokenizer</span><span class="p">,</span> <span class="n">text</span><span class="p">):</span>
<span class="c1"># Truncate and pad the max length to ensure that the token size is compatible with fixed-sized encoder (Not necessary for pure CPU execution)</span>
<span class="n">batch</span> <span class="o">=</span> <span class="n">tokenizer</span><span class="p">(</span><span class="n">text</span><span class="p">,</span> <span class="n">max_length</span><span class="o">=</span><span class="n">max_decoder_length</span><span class="p">,</span> <span class="n">truncation</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="s1">'max_length'</span><span class="p">,</span> <span class="n">return_tensors</span><span class="o">=</span><span class="s2">"pt"</span><span class="p">)</span>
<span class="n">output</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">generate</span><span class="p">(</span><span class="o">**</span><span class="n">batch</span><span class="p">,</span> <span class="n">max_length</span><span class="o">=</span><span class="n">max_decoder_length</span><span class="p">,</span> <span class="n">num_beams</span><span class="o">=</span><span class="n">num_beams</span><span class="p">,</span> <span class="n">num_return_sequences</span><span class="o">=</span><span class="n">num_beams</span><span class="p">)</span>
<span class="n">results</span> <span class="o">=</span> <span class="p">[</span><span class="n">tokenizer</span><span class="o">.</span><span class="n">decode</span><span class="p">(</span><span class="n">t</span><span class="p">,</span> <span class="n">skip_special_tokens</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span> <span class="k">for</span> <span class="n">t</span> <span class="ow">in</span> <span class="n">output</span><span class="p">]</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'Texts:'</span><span class="p">)</span>
<span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">summary</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">results</span><span class="p">):</span>
<span class="nb">print</span><span class="p">(</span><span class="n">i</span> <span class="o">+</span> <span class="mi">1</span><span class="p">,</span> <span class="n">summary</span><span class="p">)</span>
</pre></div>
</div>
</div>
<p>Note that after loading the model, we also set the maximum length. This will later be used to limit the size of the compiled model.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">transformers</span> <span class="kn">import</span> <span class="n">MarianMTModel</span><span class="p">,</span> <span class="n">MarianTokenizer</span>
<span class="n">model_cpu</span> <span class="o">=</span> <span class="n">MarianMTModel</span><span class="o">.</span><span class="n">from_pretrained</span><span class="p">(</span><span class="n">model_name</span><span class="p">)</span>
<span class="n">model_cpu</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">max_length</span> <span class="o">=</span> <span class="n">max_decoder_length</span>
<span class="n">model_cpu</span><span class="o">.</span><span class="n">eval</span><span class="p">()</span>
<span class="n">tokenizer</span> <span class="o">=</span> <span class="n">MarianTokenizer</span><span class="o">.</span><span class="n">from_pretrained</span><span class="p">(</span><span class="n">model_name</span><span class="p">)</span>
<span class="n">sample_text</span> <span class="o">=</span> <span class="s2">"I am a small frog."</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">infer</span><span class="p">(</span><span class="n">model_cpu</span><span class="p">,</span> <span class="n">tokenizer</span><span class="p">,</span> <span class="n">sample_text</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="Padded-Model">
<h2>Padded Model<a class="headerlink" href="#Padded-Model" title="Permalink to this headline">#</a></h2>
<p>In order to perform inference on Neuron, the model must be changed in a way that it supports tracing and fixed-sized inputs. One way in which this is possible is to use a pad the model inputs to the maximum possible tensor sizes. The benefit of using a padded model is that it supports variable length text generation up to a specified length <code class="docutils literal notranslate"><span class="pre">max_decoder_length</span></code>. A consequence of padding is that it can negatively impact performance due to large data transfers.</p>
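As a quick illustration of what padding means here (the token IDs below are made up for the example), a shorter input is right-padded to the fixed compiled length before every inference:

```python
# Illustration only: right-pad a variable-length batch of token ids to the
# fixed length that a compiled model expects.
import torch
from torch.nn import functional as F

fixed_length = 32
input_ids = torch.tensor([[101, 2023, 2003, 102]])  # 4 real tokens (made-up ids)
pad_size = fixed_length - input_ids.shape[1]
padded = F.pad(input_ids, (0, pad_size))            # zero-pad on the right
print(padded.shape)                                 # torch.Size([1, 32])
```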
<div class="section" id="PaddedEncoder-&-PaddedDecoder-Modules">
<h3>PaddedEncoder & PaddedDecoder Modules<a class="headerlink" href="#PaddedEncoder-&-PaddedDecoder-Modules" title="Permalink to this headline">#</a></h3>
<p>Here you will define wrappers around the encoder and decoder portions of the generation model that are compatible with <code class="docutils literal notranslate"><span class="pre">torch.jit.trace</span></code> as well as fixed-sized inputs.</p>
<p>The following are important features which are distinct from the default configuration:</p>
<ol class="arabic simple">
<li><p>Disabled <code class="docutils literal notranslate"><span class="pre">return_dict</span></code>. When this is enabled, the network uses <code class="docutils literal notranslate"><span class="pre">dataclass</span></code> type outputs which are not compatible with <code class="docutils literal notranslate"><span class="pre">torch.jit.trace</span></code>.</p></li>
<li><p>Disabled <code class="docutils literal notranslate"><span class="pre">use_cache</span></code>. When this option is enabled, the network expects a collection of cache tensors which grow upon each iteration. Since Neuron requires fixed sized inputs, this must be disabled.</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">GenerationMixin:beam_search</span></code> implementation uses only the logits for the current iteration index from the original decoder layer output. Since inputs must be padded, performance can be improved by selecting only a subset of the hidden state prior to the final linear layer. For efficiency on Neuron, this reduction uses an elementwise-multiply to mask out the unused hidden values and then sums along an axis.</p></li>
<li><p>Since a reduction step is insterted between the decoder output and the final logit calculation, the original <code class="docutils literal notranslate"><span class="pre">model</span></code> attribute is not used. Instead the <code class="docutils literal notranslate"><span class="pre">PaddedDecoder</span></code> class combines the decoder, reducer, and linear layers into a combined forward pass. In the original model there is a clear distinction between the decoder layer and the final linear layer. These layers are fused together to get one large fully optimized graph.</p></li>
</ol>
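The following sanity check (with small random tensors, purely illustrative) shows that the multiply-and-sum reduction described in item 3 picks out the same hidden row as direct indexing would:

```python
# Masking + summing selects a single sequence position; this is how the
# decoder wrapper below extracts the current iteration's hidden state.
import torch

hidden = torch.randn(2, 32, 512)  # (batch, sequence, hidden)
index = torch.tensor(5)
mask = (torch.arange(32, dtype=torch.float32) == index).view(1, -1, 1)
reduced = torch.sum(hidden * mask, 1, keepdim=True)
assert torch.allclose(reduced, hidden[:, 5:6, :])
```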
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torch</span>
<span class="kn">from</span> <span class="nn">torch.nn</span> <span class="kn">import</span> <span class="n">functional</span> <span class="k">as</span> <span class="n">F</span>
<span class="k">class</span> <span class="nc">PaddedEncoder</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Module</span><span class="p">):</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">model</span><span class="p">):</span>
<span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">encoder</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">encoder</span>
<span class="bp">self</span><span class="o">.</span><span class="n">main_input_name</span> <span class="o">=</span> <span class="s1">'input_ids'</span>
<span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">input_ids</span><span class="p">,</span> <span class="n">attention_mask</span><span class="p">):</span>
<span class="k">return</span> <span class="bp">self</span><span class="o">.</span><span class="n">encoder</span><span class="p">(</span><span class="n">input_ids</span><span class="p">,</span> <span class="n">attention_mask</span><span class="o">=</span><span class="n">attention_mask</span><span class="p">,</span> <span class="n">return_dict</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span>
<span class="k">class</span> <span class="nc">PaddedDecoder</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Module</span><span class="p">):</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">model</span><span class="p">):</span>
<span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">weight</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">shared</span><span class="o">.</span><span class="n">weight</span><span class="o">.</span><span class="n">clone</span><span class="p">()</span><span class="o">.</span><span class="n">detach</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">bias</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">final_logits_bias</span><span class="o">.</span><span class="n">clone</span><span class="p">()</span><span class="o">.</span><span class="n">detach</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">decoder</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">decoder</span>
<span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">input_ids</span><span class="p">,</span> <span class="n">attention_mask</span><span class="p">,</span> <span class="n">encoder_outputs</span><span class="p">,</span> <span class="n">index</span><span class="p">):</span>
<span class="c1"># Invoke the decoder</span>
<span class="n">hidden</span><span class="p">,</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">decoder</span><span class="p">(</span>
<span class="n">input_ids</span><span class="o">=</span><span class="n">input_ids</span><span class="p">,</span>
<span class="n">encoder_hidden_states</span><span class="o">=</span><span class="n">encoder_outputs</span><span class="p">,</span>
<span class="n">encoder_attention_mask</span><span class="o">=</span><span class="n">attention_mask</span><span class="p">,</span>
<span class="n">return_dict</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span>
<span class="n">use_cache</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span>
<span class="p">)</span>
<span class="n">_</span><span class="p">,</span> <span class="n">n_length</span><span class="p">,</span> <span class="n">_</span> <span class="o">=</span> <span class="n">hidden</span><span class="o">.</span><span class="n">shape</span>
<span class="c1"># Create selection mask</span>
<span class="n">mask</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">arange</span><span class="p">(</span><span class="n">n_length</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span> <span class="o">==</span> <span class="n">index</span>
<span class="n">mask</span> <span class="o">=</span> <span class="n">mask</span><span class="o">.</span><span class="n">view</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">)</span>
<span class="c1"># Broadcast mask</span>
<span class="n">masked</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">multiply</span><span class="p">(</span><span class="n">hidden</span><span class="p">,</span> <span class="n">mask</span><span class="p">)</span>
<span class="c1"># Reduce along 1st dimension</span>
<span class="n">hidden</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">sum</span><span class="p">(</span><span class="n">masked</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="n">keepdims</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="c1"># Compute final linear layer for token probabilities</span>
<span class="n">logits</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">linear</span><span class="p">(</span>
<span class="n">hidden</span><span class="p">,</span>
<span class="bp">self</span><span class="o">.</span><span class="n">weight</span><span class="p">,</span>
<span class="n">bias</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">bias</span>
<span class="p">)</span>
<span class="k">return</span> <span class="n">logits</span>
<br></pre></div>
</div>
</div>
</div>
<div class="section" id="PaddedGenerator---GenerationMixin-Class">
<h3>PaddedGenerator - GenerationMixin Class<a class="headerlink" href="#PaddedGenerator---GenerationMixin-Class" title="Permalink to this headline">#</a></h3>
<p>On text generation tasks, HuggingFace Transformers defines a <a class="reference external" href="https://huggingface.co/transformers/v4.0.1/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin">GenerationMixin</a> base class which provides standard methods and algorithms to generate text. For this tutorial, you will be using the beam search algorithm on encoder/decoder architectures.</p>
<p>To be able to use these methods, you will be defining your own class derived from the GenerationMixin class to run a beam search. This will invoke the encoder and decoder layers in a way that is compatible with fixed sized inputs and traced modules. This means you must import the base class and the output objects (<a class="reference external" href="https://huggingface.co/transformers/v4.0.1/main_classes/output.html#transformers.modeling_outputs.Seq2SeqLMOutput">Seq2SeqLMOutput</a>,
<a class="reference external" href="https://huggingface.co/transformers/v4.0.1/main_classes/output.html#transformers.modeling_outputs.BaseModelOutput">BaseModelOutput</a>) used by the <a class="reference external" href="https://huggingface.co/transformers/v4.0.1/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.beam_search">beam_search</a> algorithm.</p>
<p>The <code class="docutils literal notranslate"><span class="pre">GenerationMixin:generate</span></code> method will use <code class="docutils literal notranslate"><span class="pre">GenerationMixin:beam_search</span></code> which requires that you to define your own class implementation that invokes the <code class="docutils literal notranslate"><span class="pre">PaddedEncoder</span></code> and <code class="docutils literal notranslate"><span class="pre">PaddedDecoder</span></code> modules using padded inputs. The standard generator model implementation will not work by default because it is intended to infer with variable-sized (growing) input tensors.</p>
<p>The <code class="docutils literal notranslate"><span class="pre">from_model</span></code> method is defined to create the <code class="docutils literal notranslate"><span class="pre">PaddedGenerator</span></code> from an existing pretrained generator class.</p>
<p>To invoke the Encoder and Decoder traced modules in a way that is compatible with the <code class="docutils literal notranslate"><span class="pre">GenerationMixin:beam_search</span></code> implementation, the <code class="docutils literal notranslate"><span class="pre">get_encoder</span></code>, <code class="docutils literal notranslate"><span class="pre">__call__</span></code>, and <code class="docutils literal notranslate"><span class="pre">prepare_inputs_for_generation</span></code> methods are overriden.</p>
<p>Lastly, the class defines methods for serialization so that the model can be easily saved and loaded.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">os</span>
<span class="kn">from</span> <span class="nn">transformers</span> <span class="kn">import</span> <span class="n">GenerationMixin</span><span class="p">,</span> <span class="n">AutoConfig</span>
<span class="kn">from</span> <span class="nn">transformers.modeling_outputs</span> <span class="kn">import</span> <span class="n">Seq2SeqLMOutput</span><span class="p">,</span> <span class="n">BaseModelOutput</span>
<span class="kn">from</span> <span class="nn">transformers.modeling_utils</span> <span class="kn">import</span> <span class="n">PreTrainedModel</span>
<span class="k">class</span> <span class="nc">PaddedGenerator</span><span class="p">(</span><span class="n">PreTrainedModel</span><span class="p">,</span> <span class="n">GenerationMixin</span><span class="p">):</span>
<span class="nd">@classmethod</span>
<span class="k">def</span> <span class="nf">from_model</span><span class="p">(</span><span class="bp">cls</span><span class="p">,</span> <span class="n">model</span><span class="p">):</span>
<span class="n">generator</span> <span class="o">=</span> <span class="bp">cls</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">config</span><span class="p">)</span>
<span class="n">generator</span><span class="o">.</span><span class="n">encoder</span> <span class="o">=</span> <span class="n">PaddedEncoder</span><span class="p">(</span><span class="n">model</span><span class="p">)</span>
<span class="n">generator</span><span class="o">.</span><span class="n">decoder</span> <span class="o">=</span> <span class="n">PaddedDecoder</span><span class="p">(</span><span class="n">model</span><span class="p">)</span>
<span class="k">return</span> <span class="n">generator</span>
<span class="k">def</span> <span class="nf">prepare_inputs_for_generation</span><span class="p">(</span>
<span class="bp">self</span><span class="p">,</span>
<span class="n">input_ids</span><span class="p">,</span>
<span class="n">encoder_outputs</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span>
<span class="n">attention_mask</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span>
<span class="o">**</span><span class="n">kwargs</span><span class="p">,</span>
<span class="p">):</span>
<span class="c1"># Pad the inputs for Neuron</span>
<span class="n">current_length</span> <span class="o">=</span> <span class="n">input_ids</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">pad_size</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">max_length</span> <span class="o">-</span> <span class="n">current_length</span>
<span class="k">return</span> <span class="nb">dict</span><span class="p">(</span>
<span class="n">input_ids</span><span class="o">=</span><span class="n">F</span><span class="o">.</span><span class="n">pad</span><span class="p">(</span><span class="n">input_ids</span><span class="p">,</span> <span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">pad_size</span><span class="p">)),</span>
<span class="n">attention_mask</span><span class="o">=</span><span class="n">attention_mask</span><span class="p">,</span>
<span class="n">encoder_outputs</span><span class="o">=</span><span class="n">encoder_outputs</span><span class="o">.</span><span class="n">last_hidden_state</span><span class="p">,</span>
<span class="n">current_length</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">tensor</span><span class="p">(</span><span class="n">current_length</span> <span class="o">-</span> <span class="mi">1</span><span class="p">),</span>
<span class="p">)</span>
<span class="k">def</span> <span class="nf">get_encoder</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="k">def</span> <span class="nf">encode</span><span class="p">(</span><span class="n">input_ids</span><span class="p">,</span> <span class="n">attention_mask</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="n">output</span><span class="p">,</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">encoder</span><span class="p">(</span><span class="n">input_ids</span><span class="p">,</span> <span class="n">attention_mask</span><span class="p">)</span>
<span class="k">return</span> <span class="n">BaseModelOutput</span><span class="p">(</span>
<span class="n">last_hidden_state</span><span class="o">=</span><span class="n">output</span><span class="p">,</span>
<span class="p">)</span>
<span class="k">return</span> <span class="n">encode</span>
<span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">input_ids</span><span class="p">,</span> <span class="n">attention_mask</span><span class="p">,</span> <span class="n">encoder_outputs</span><span class="p">,</span> <span class="n">current_length</span><span class="p">,</span> <span class="o">**</span><span class="n">kwargs</span><span class="p">):</span>
<span class="n">logits</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">decoder</span><span class="p">(</span><span class="n">input_ids</span><span class="p">,</span> <span class="n">attention_mask</span><span class="p">,</span> <span class="n">encoder_outputs</span><span class="p">,</span> <span class="n">current_length</span><span class="p">)</span>
<span class="k">return</span> <span class="n">Seq2SeqLMOutput</span><span class="p">(</span><span class="n">logits</span><span class="o">=</span><span class="n">logits</span><span class="p">)</span>
<span class="nd">@property</span>
<span class="k">def</span> <span class="nf">device</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span> <span class="c1"># Attribute required by beam search</span>
<span class="k">return</span> <span class="n">torch</span><span class="o">.</span><span class="n">device</span><span class="p">(</span><span class="s1">'cpu'</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">save_pretrained</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">directory</span><span class="p">):</span>
<span class="k">if</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">isfile</span><span class="p">(</span><span class="n">directory</span><span class="p">):</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Provided path (</span><span class="si">{</span><span class="n">directory</span><span class="si">}</span><span class="s2">) should be a directory, not a file"</span><span class="p">)</span>
<span class="k">return</span>
<span class="n">os</span><span class="o">.</span><span class="n">makedirs</span><span class="p">(</span><span class="n">directory</span><span class="p">,</span> <span class="n">exist_ok</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="n">torch</span><span class="o">.</span><span class="n">jit</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">encoder</span><span class="p">,</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">directory</span><span class="p">,</span> <span class="s1">'encoder.pt'</span><span class="p">))</span>
<span class="n">torch</span><span class="o">.</span><span class="n">jit</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">decoder</span><span class="p">,</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">directory</span><span class="p">,</span> <span class="s1">'decoder.pt'</span><span class="p">))</span>
<span class="bp">self</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">save_pretrained</span><span class="p">(</span><span class="n">directory</span><span class="p">)</span>
<span class="nd">@classmethod</span>
<span class="k">def</span> <span class="nf">from_pretrained</span><span class="p">(</span><span class="bp">cls</span><span class="p">,</span> <span class="n">directory</span><span class="p">):</span>
<span class="n">config</span> <span class="o">=</span> <span class="n">AutoConfig</span><span class="o">.</span><span class="n">from_pretrained</span><span class="p">(</span><span class="n">directory</span><span class="p">)</span>
<span class="n">obj</span> <span class="o">=</span> <span class="bp">cls</span><span class="p">(</span><span class="n">config</span><span class="p">)</span>
<span class="n">obj</span><span class="o">.</span><span class="n">encoder</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">jit</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">directory</span><span class="p">,</span> <span class="s1">'encoder.pt'</span><span class="p">))</span>
<span class="n">obj</span><span class="o">.</span><span class="n">decoder</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">jit</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">directory</span><span class="p">,</span> <span class="s1">'decoder.pt'</span><span class="p">))</span>
<span class="nb">setattr</span><span class="p">(</span><span class="n">obj</span><span class="o">.</span><span class="n">encoder</span><span class="p">,</span> <span class="s1">'main_input_name'</span><span class="p">,</span> <span class="s1">'input_ids'</span><span class="p">)</span> <span class="c1"># Attribute required by beam search</span>
<span class="k">return</span> <span class="n">obj</span>
<br></pre></div>
</div>
</div>
</div>
<div class="section" id="Padded-CPU-Inference">
<h3>Padded CPU Inference<a class="headerlink" href="#Padded-CPU-Inference" title="Permalink to this headline">#</a></h3>
<p>To start, it is important to ensure that the transformations we have made to the model were successful. Using the classes defined above we can test that the padded model execution on CPU is identical to the original output also running on CPU.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">padded_model_cpu</span> <span class="o">=</span> <span class="n">PaddedGenerator</span><span class="o">.</span><span class="n">from_model</span><span class="p">(</span><span class="n">model_cpu</span><span class="p">)</span>
<span class="n">infer</span><span class="p">(</span><span class="n">padded_model_cpu</span><span class="p">,</span> <span class="n">tokenizer</span><span class="p">,</span> <span class="n">sample_text</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="Padded-Neuron-Tracing-&-Inference">
<h3>Padded Neuron Tracing & Inference<a class="headerlink" href="#Padded-Neuron-Tracing-&-Inference" title="Permalink to this headline">#</a></h3>
<p>Now that the padded version of model is confirmed to produce the same outputs as the non-padded version, the model can be compiled for Neuron.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">torch</span>
<span class="kn">import</span> <span class="nn">torch_neuron</span>
<span class="k">def</span> <span class="nf">trace</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">num_texts</span><span class="p">,</span> <span class="n">num_beams</span><span class="p">,</span> <span class="n">max_decoder_length</span><span class="p">,</span> <span class="n">max_encoder_length</span><span class="p">):</span>
<span class="w"> </span><span class="sd">"""</span>
<span class="sd"> Traces the encoder and decoder modules for use on Neuron.</span>
<span class="sd"> This function fixes the network to the given sizes. Once the model has been</span>
<span class="sd"> compiled to a given size, the inputs to these networks must always be of</span>
<span class="sd"> fixed size.</span>
<span class="sd"> Args:</span>
<span class="sd"> model (PaddedGenerator): The padded generator to compile for Neuron</span>
<span class="sd"> num_texts (int): The number of input texts to translate at once</span>
<span class="sd"> num_beams (int): The number of beams to compute per text</span>
<span class="sd"> max_decoder_length (int): The maximum number of tokens to be generated</span>
<span class="sd"> max_encoder_length (int): The maximum number of input tokens that will be encoded</span>
<span class="sd"> """</span>
<span class="c1"># Trace the encoder</span>
<span class="n">inputs</span> <span class="o">=</span> <span class="p">(</span>
<span class="n">torch</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="n">num_texts</span><span class="p">,</span> <span class="n">max_encoder_length</span><span class="p">),</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">long</span><span class="p">),</span>
<span class="n">torch</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="n">num_texts</span><span class="p">,</span> <span class="n">max_encoder_length</span><span class="p">),</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">long</span><span class="p">),</span>
<span class="p">)</span>
<span class="n">encoder</span> <span class="o">=</span> <span class="n">torch_neuron</span><span class="o">.</span><span class="n">trace</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">encoder</span><span class="p">,</span> <span class="n">inputs</span><span class="p">)</span>
<span class="c1"># Trace the decoder (with expanded inputs)</span>
<span class="n">batch_size</span> <span class="o">=</span> <span class="n">num_texts</span> <span class="o">*</span> <span class="n">num_beams</span>
<span class="n">inputs</span> <span class="o">=</span> <span class="p">(</span>
<span class="n">torch</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="n">batch_size</span><span class="p">,</span> <span class="n">max_decoder_length</span><span class="p">),</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">long</span><span class="p">),</span>
<span class="n">torch</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="n">batch_size</span><span class="p">,</span> <span class="n">max_encoder_length</span><span class="p">),</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">long</span><span class="p">),</span>
<span class="n">torch</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="n">batch_size</span><span class="p">,</span> <span class="n">max_encoder_length</span><span class="p">,</span> <span class="n">model</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">d_model</span><span class="p">),</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">float</span><span class="p">),</span>
<span class="n">torch</span><span class="o">.</span><span class="n">tensor</span><span class="p">(</span><span class="mi">0</span><span class="p">),</span>
<span class="p">)</span>
<span class="n">decoder</span> <span class="o">=</span> <span class="n">torch_neuron</span><span class="o">.</span><span class="n">trace</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">decoder</span><span class="p">,</span> <span class="n">inputs</span><span class="p">)</span>
<span class="n">traced</span> <span class="o">=</span> <span class="n">PaddedGenerator</span><span class="p">(</span><span class="n">model</span><span class="o">.</span><span class="n">config</span><span class="p">)</span>
<span class="n">traced</span><span class="o">.</span><span class="n">encoder</span> <span class="o">=</span> <span class="n">encoder</span>
<span class="n">traced</span><span class="o">.</span><span class="n">decoder</span> <span class="o">=</span> <span class="n">decoder</span>
<span class="nb">setattr</span><span class="p">(</span><span class="n">encoder</span><span class="p">,</span> <span class="s1">'main_input_name'</span><span class="p">,</span> <span class="s1">'input_ids'</span><span class="p">)</span> <span class="c1"># Attribute required by beam search</span>
<span class="k">return</span> <span class="n">traced</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">padded_model_neuron</span> <span class="o">=</span> <span class="n">trace</span><span class="p">(</span><span class="n">padded_model_cpu</span><span class="p">,</span> <span class="n">num_texts</span><span class="p">,</span> <span class="n">num_beams</span><span class="p">,</span> <span class="n">max_decoder_length</span><span class="p">,</span> <span class="n">max_encoder_length</span><span class="p">)</span>
</pre></div>
</div>
</div>
<p>Comparing the Neuron execution to the original CPU implementation, you will see the exact same generated text.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="c1"># CPU execution for comparison</span>
<span class="n">infer</span><span class="p">(</span><span class="n">padded_model_neuron</span><span class="p">,</span> <span class="n">tokenizer</span><span class="p">,</span> <span class="n">sample_text</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="Padded-Neuron-Serialization">
<h3>Padded Neuron Serialization<a class="headerlink" href="#Padded-Neuron-Serialization" title="Permalink to this headline">#</a></h3>
<p>Finally, we can test that we can serialize and reload the model so that it can be used later in its precompiled format.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">padded_model_neuron</span><span class="o">.</span><span class="n">save_pretrained</span><span class="p">(</span><span class="s1">'NeuronPaddedMarianMT'</span><span class="p">)</span>
<span class="n">padded_model_loaded</span> <span class="o">=</span> <span class="n">PaddedGenerator</span><span class="o">.</span><span class="n">from_pretrained</span><span class="p">(</span><span class="s1">'NeuronPaddedMarianMT'</span><span class="p">)</span>
<span class="n">infer</span><span class="p">(</span><span class="n">padded_model_loaded</span><span class="p">,</span> <span class="n">tokenizer</span><span class="p">,</span> <span class="n">sample_text</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
</div>
<div class="section" id="Greedy-Unrolled-Model">
<h2>Greedy Unrolled Model<a class="headerlink" href="#Greedy-Unrolled-Model" title="Permalink to this headline">#</a></h2>
<p>An unrolled version of the model can achieve better performance in some cases since all operations will be executed on the Neuron hardware without returning to CPU. The consequence of this type of model is that since the generation loop execution never returns to CPU, the entire sequence up to <code class="docutils literal notranslate"><span class="pre">max_decoder_length</span></code> is performed in a single forward pass.</p>
<p>The following module performs greedy text generation. Unlike the original beam search text generation, this implementation always selects the most probable token and does not generate multiple result texts.</p>
<div class="section" id="GreedyUnrolledGenerator-Module">
<h3>GreedyUnrolledGenerator Module<a class="headerlink" href="#GreedyUnrolledGenerator-Module" title="Permalink to this headline">#</a></h3>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="k">class</span> <span class="nc">GreedyUnrolledGenerator</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Module</span><span class="p">):</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">model</span><span class="p">):</span>
<span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">config</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">config</span>
<span class="bp">self</span><span class="o">.</span><span class="n">model</span> <span class="o">=</span> <span class="n">model</span>
<span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">input_ids</span><span class="p">,</span> <span class="n">attention_mask</span><span class="p">):</span>
<span class="c1"># Generate the encoder state for the input tokens. This is only done once and the state is reused.</span>
<span class="n">encoder_outputs</span><span class="p">,</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">encoder</span><span class="p">(</span><span class="n">input_ids</span><span class="p">,</span> <span class="n">attention_mask</span><span class="o">=</span><span class="n">attention_mask</span><span class="p">,</span> <span class="n">return_dict</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span>
<span class="c1"># Set the intial state for the decode loop. This will grow per decoder iteration</span>
<span class="n">tokens</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">full</span><span class="p">((</span><span class="n">input_ids</span><span class="o">.</span><span class="n">size</span><span class="p">(</span><span class="mi">0</span><span class="p">),</span> <span class="mi">2</span><span class="p">),</span> <span class="bp">self</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">decoder_start_token_id</span><span class="p">)</span>
<span class="c1"># Iteratively invoke the decoder on incrementally generated `tokens` to generate a `next_token`.</span>
<span class="c1"># Note that unlike the GeneratorMixin.generate function, there is no early-exit if the stop token</span>
<span class="c1"># has been reached. This will always run a fixed number of iterations.</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">max_length</span><span class="p">):</span>
<span class="n">hidden</span><span class="p">,</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">decoder</span><span class="p">(</span>
<span class="n">input_ids</span><span class="o">=</span><span class="n">tokens</span><span class="p">,</span>
<span class="n">encoder_hidden_states</span><span class="o">=</span><span class="n">encoder_outputs</span><span class="p">,</span>
<span class="n">encoder_attention_mask</span><span class="o">=</span><span class="n">attention_mask</span><span class="p">,</span>
<span class="n">return_dict</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span>
<span class="n">use_cache</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span>
<span class="p">)</span> <span class="c1"># size: [batch, current_length, vocab_size]</span>
<span class="n">logits</span> <span class="o">=</span> <span class="n">F</span><span class="o">.</span><span class="n">linear</span><span class="p">(</span>
<span class="n">hidden</span><span class="p">[:,</span> <span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="p">:],</span>
<span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">shared</span><span class="o">.</span><span class="n">weight</span><span class="p">,</span>
<span class="n">bias</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">final_logits_bias</span>
<span class="p">)</span>
<span class="n">next_tokens</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">argmax</span><span class="p">(</span><span class="n">logits</span><span class="p">,</span> <span class="n">dim</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">keepdims</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="n">tokens</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">cat</span><span class="p">([</span><span class="n">tokens</span><span class="p">,</span> <span class="n">next_tokens</span><span class="p">],</span> <span class="n">dim</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
<span class="k">return</span> <span class="n">tokens</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="Greedy-CPU-Inference">
<h3>Greedy CPU Inference<a class="headerlink" href="#Greedy-CPU-Inference" title="Permalink to this headline">#</a></h3>
<p>The inference code must be updated since the <code class="docutils literal notranslate"><span class="pre">generate</span></code> method is no longer used. This is because the entire generative inference loop occurs within the <code class="docutils literal notranslate"><span class="pre">GreedyUnrolledGenerator.forward</span></code> method.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="k">def</span> <span class="nf">infer_greedy</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">tokenizer</span><span class="p">,</span> <span class="n">text</span><span class="p">):</span>
<span class="n">batch</span> <span class="o">=</span> <span class="n">tokenizer</span><span class="p">(</span><span class="n">text</span><span class="p">,</span> <span class="n">max_length</span><span class="o">=</span><span class="n">max_decoder_length</span><span class="p">,</span> <span class="n">truncation</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="s1">'max_length'</span><span class="p">,</span> <span class="n">return_tensors</span><span class="o">=</span><span class="s2">"pt"</span><span class="p">)</span>
<span class="n">inputs</span> <span class="o">=</span> <span class="n">batch</span><span class="p">[</span><span class="s1">'input_ids'</span><span class="p">],</span> <span class="n">batch</span><span class="p">[</span><span class="s1">'attention_mask'</span><span class="p">]</span>
<span class="n">tokens</span> <span class="o">=</span> <span class="n">greedy_cpu</span><span class="p">(</span><span class="o">*</span><span class="n">inputs</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'Texts:'</span><span class="p">)</span>
<span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">t</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">tokens</span><span class="p">):</span>
<span class="n">result</span> <span class="o">=</span> <span class="n">tokenizer</span><span class="o">.</span><span class="n">decode</span><span class="p">(</span><span class="n">t</span><span class="p">,</span> <span class="n">skip_special_tokens</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">i</span> <span class="o">+</span> <span class="mi">1</span><span class="p">,</span> <span class="n">result</span><span class="p">)</span>
</pre></div>
</div>
</div>
<p>As in the previous sections of this tutorial, the greedy model is first executed on CPU to validate that it produces the correct results. In this example, the generated text matches the first result of the original beam search.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">model_cpu</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">max_length</span> <span class="o">=</span> <span class="mi">8</span> <span class="c1"># This controls the number of decoder loops. Reduced to improve compilation speed.</span>
<span class="n">greedy_cpu</span> <span class="o">=</span> <span class="n">GreedyUnrolledGenerator</span><span class="p">(</span><span class="n">model_cpu</span><span class="p">)</span>
<span class="n">infer_greedy</span><span class="p">(</span><span class="n">greedy_cpu</span><span class="p">,</span> <span class="n">tokenizer</span><span class="p">,</span> <span class="n">sample_text</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="Greedy-Neuron-Tracing-&-Inference">
<h3>Greedy Neuron Tracing & Inference<a class="headerlink" href="#Greedy-Neuron-Tracing-&-Inference" title="Permalink to this headline">#</a></h3>
<p>Similarly, the tracing is simplified since the <code class="docutils literal notranslate"><span class="pre">GreedyUnrolledGenerator.forward</span></code> method can now be compiled as a single unit.</p>
<p>For compilation efficiency, two changes are made compared to normal compilation:</p>
<ul>
<li><p><code class="docutils literal notranslate"><span class="pre">torch.jit.freeze</span></code> is used because it can <em>sometimes</em> speed up compilation in the case where a module is re-used multiple times. Here it is beneficial because <code class="docutils literal notranslate"><span class="pre">self.model.model.decoder</span></code> is used in a loop.</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">torch_neuron.trace</span></code> option <code class="docutils literal notranslate"><span class="pre">fallback</span></code> is set to <code class="docutils literal notranslate"><span class="pre">False</span></code>. This forces all operations to execute on Neuron. Most of the time this is not recommended or efficient; in this case it is more efficient because a single subgraph is produced rather than many. Otherwise, one subgraph would be produced per decoder iteration since <code class="docutils literal notranslate"><span class="pre">aten::embedding</span></code> is executed in a loop. The <code class="docutils literal notranslate"><span class="pre">aten::embedding</span></code> operation is executed on CPU by default since this is usually more efficient than running it on Neuron.</p></li>
</ul>
<p>You may notice that compilation will take significantly longer with the unrolled model since the model inserts new operations into the compute graph for every single decoder iteration. This creates a much larger model graph even though the weights are re-used.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">example</span> <span class="o">=</span> <span class="p">(</span>
<span class="n">torch</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="n">num_texts</span><span class="p">,</span> <span class="n">max_encoder_length</span><span class="p">),</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">long</span><span class="p">),</span>
<span class="n">torch</span><span class="o">.</span><span class="n">ones</span><span class="p">((</span><span class="n">num_texts</span><span class="p">,</span> <span class="n">max_encoder_length</span><span class="p">),</span> <span class="n">dtype</span><span class="o">=</span><span class="n">torch</span><span class="o">.</span><span class="n">long</span><span class="p">),</span>
<span class="p">)</span>
<span class="n">greedy_cpu</span><span class="o">.</span><span class="n">eval</span><span class="p">()</span>
<span class="n">greedy_trace</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">jit</span><span class="o">.</span><span class="n">trace</span><span class="p">(</span><span class="n">greedy_cpu</span><span class="p">,</span> <span class="n">example</span><span class="p">)</span>
<span class="n">greedy_frozen</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">jit</span><span class="o">.</span><span class="n">freeze</span><span class="p">(</span><span class="n">greedy_trace</span><span class="p">)</span>
<span class="n">greedy_neuron</span> <span class="o">=</span> <span class="n">torch_neuron</span><span class="o">.</span><span class="n">trace</span><span class="p">(</span><span class="n">greedy_frozen</span><span class="p">,</span> <span class="n">example</span><span class="p">,</span> <span class="n">fallback</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">infer_greedy</span><span class="p">(</span><span class="n">greedy_neuron</span><span class="p">,</span> <span class="n">tokenizer</span><span class="p">,</span> <span class="n">sample_text</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="Greedy-Neuron-Serialization">
<h3>Greedy Neuron Serialization<a class="headerlink" href="#Greedy-Neuron-Serialization" title="Permalink to this headline">#</a></h3>
<p>Unlike the previous version of the model, which used the <code class="docutils literal notranslate"><span class="pre">GenerationMixin</span></code> base class, this greedy version of the model can be serialized using the regular <code class="docutils literal notranslate"><span class="pre">torch.jit.save</span></code> and <code class="docutils literal notranslate"><span class="pre">torch.jit.load</span></code> utilities since it is a pure TorchScript module.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">torch</span><span class="o">.</span><span class="n">jit</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="n">greedy_neuron</span><span class="p">,</span> <span class="s1">'greedy_neuron.pt'</span><span class="p">)</span>
<span class="n">loaded_greedy_neuron</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">jit</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="s1">'greedy_neuron.pt'</span><span class="p">)</span>
<span class="n">infer_greedy</span><span class="p">(</span><span class="n">loaded_greedy_neuron</span><span class="p">,</span> <span class="n">tokenizer</span><span class="p">,</span> <span class="n">sample_text</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
</div>
<div class="section" id="Appendix">
<h2>Appendix<a class="headerlink" href="#Appendix" title="Permalink to this headline">#</a></h2>
<div class="section" id="BART-(Mask-Filling-Task)">
<h3>BART (Mask Filling Task)<a class="headerlink" href="#BART-(Mask-Filling-Task)" title="Permalink to this headline">#</a></h3>
<p>The <code class="docutils literal notranslate"><span class="pre">PaddedGenerator</span></code> class can be applied to the BART model for the task of filling in mask tokens.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">transformers</span> <span class="kn">import</span> <span class="n">BartForConditionalGeneration</span><span class="p">,</span> <span class="n">BartTokenizer</span>
<span class="n">bart_name</span> <span class="o">=</span> <span class="s2">"facebook/bart-large"</span>
<span class="n">bart_model</span> <span class="o">=</span> <span class="n">BartForConditionalGeneration</span><span class="o">.</span><span class="n">from_pretrained</span><span class="p">(</span><span class="n">bart_name</span><span class="p">)</span>
<span class="n">bart_model</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">max_length</span> <span class="o">=</span> <span class="n">max_decoder_length</span>
<span class="n">bart_tokenizer</span> <span class="o">=</span> <span class="n">BartTokenizer</span><span class="o">.</span><span class="n">from_pretrained</span><span class="p">(</span><span class="n">bart_name</span><span class="p">)</span>
<span class="n">bart_text</span> <span class="o">=</span> <span class="s2">"UN Chief Says There Is No <mask> in Syria"</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="c1"># CPU Execution</span>
<span class="n">infer</span><span class="p">(</span><span class="n">bart_model</span><span class="p">,</span> <span class="n">bart_tokenizer</span><span class="p">,</span> <span class="n">bart_text</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="c1"># Neuron Execution</span>
<span class="n">paddded_bart</span> <span class="o">=</span> <span class="n">PaddedGenerator</span><span class="o">.</span><span class="n">from_model</span><span class="p">(</span><span class="n">bart_model</span><span class="p">)</span>
<span class="n">bart_neuron</span> <span class="o">=</span> <span class="n">trace</span><span class="p">(</span><span class="n">paddded_bart</span><span class="p">,</span> <span class="n">num_texts</span><span class="p">,</span> <span class="n">num_beams</span><span class="p">,</span> <span class="n">max_decoder_length</span><span class="p">,</span> <span class="n">max_encoder_length</span><span class="p">)</span>
<span class="n">infer</span><span class="p">(</span><span class="n">bart_neuron</span><span class="p">,</span> <span class="n">bart_tokenizer</span><span class="p">,</span> <span class="n">bart_text</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="Pegasus-(Summarization-Task)">
<h3>Pegasus (Summarization Task)<a class="headerlink" href="#Pegasus-(Summarization-Task)" title="Permalink to this headline">#</a></h3>
<p>The <code class="docutils literal notranslate"><span class="pre">PaddedGenerator</span></code> class can be applied to the Pegasus model for summarization.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">transformers</span> <span class="kn">import</span> <span class="n">PegasusForConditionalGeneration</span><span class="p">,</span> <span class="n">PegasusTokenizer</span>
<span class="n">pegasus_name</span> <span class="o">=</span> <span class="s1">'google/pegasus-xsum'</span>
<span class="n">pegasus_model</span> <span class="o">=</span> <span class="n">PegasusForConditionalGeneration</span><span class="o">.</span><span class="n">from_pretrained</span><span class="p">(</span><span class="n">pegasus_name</span><span class="p">)</span>
<span class="n">pegasus_model</span><span class="o">.</span><span class="n">config</span><span class="o">.</span><span class="n">max_length</span> <span class="o">=</span> <span class="n">max_decoder_length</span>
<span class="n">pegasus_tokenizer</span> <span class="o">=</span> <span class="n">PegasusTokenizer</span><span class="o">.</span><span class="n">from_pretrained</span><span class="p">(</span><span class="n">pegasus_name</span><span class="p">)</span>
<span class="n">pegasus_text</span> <span class="o">=</span> <span class="s2">"PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires."</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="c1"># CPU Execution</span>
<span class="n">infer</span><span class="p">(</span><span class="n">pegasus_model</span><span class="p">,</span> <span class="n">pegasus_tokenizer</span><span class="p">,</span> <span class="n">pegasus_text</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="c1"># Neuron Execution</span>
<span class="n">paddded_pegasus</span> <span class="o">=</span> <span class="n">PaddedGenerator</span><span class="o">.</span><span class="n">from_model</span><span class="p">(</span><span class="n">pegasus_model</span><span class="p">)</span>
<span class="n">pegasus_neuron</span> <span class="o">=</span> <span class="n">trace</span><span class="p">(</span><span class="n">paddded_pegasus</span><span class="p">,</span> <span class="n">num_texts</span><span class="p">,</span> <span class="n">num_beams</span><span class="p">,</span> <span class="n">max_decoder_length</span><span class="p">,</span> <span class="n">max_encoder_length</span><span class="p">)</span>
<span class="n">infer</span><span class="p">(</span><span class="n">pegasus_neuron</span><span class="p">,</span> <span class="n">pegasus_tokenizer</span><span class="p">,</span> <span class="n">pegasus_text</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
</div>
</div>
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
<a class="left-prev" id="prev-link" href="../../../frameworks/torch/torch-neuron/tutorials/tutorial-torchserve.html" title="previous page">
<i class="fas fa-angle-left"></i>
<div class="prev-next-info">
<p class="prev-next-subtitle">previous</p>
<p class="prev-next-title">BERT TorchServe Tutorial</p>
</div>
</a>
<a class="right-next" id="next-link" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html" title="next page">
<div class="prev-next-info">
<p class="prev-next-subtitle">next</p>
<p class="prev-next-title">Utilizing Neuron Capabilities Tutorials</p>
</div>
<i class="fas fa-angle-right"></i>
</a>
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html>
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.rst.txt
```
.. _torch-neuron-dataparallel-app-note:
Data Parallel Inference on Torch Neuron
=======================================
.. contents:: Table of Contents
:local:
:depth: 2
Introduction
------------
This guide introduces :func:`torch.neuron.DataParallel`, a Python API that
implements data parallelism on :class:`~torch.jit.ScriptModule` models created by the
:ref:`/neuron-guide/neuron-frameworks/pytorch-neuron/api-compilation-python-api.rst`.
The following sections explain how data parallelism can improve the performance of
inference workloads on Inferentia, including how :func:`torch.neuron.DataParallel`
uses dynamic batching to run inference on variable input sizes. It covers an
overview of the :func:`torch.neuron.DataParallel` module and provides a few
:ref:`example data parallel applications <data_parallel_examples>`.
Data parallel inference
-------------------------
Data Parallelism is a form of parallelization across multiple devices or cores,
referred to as nodes. Each node contains the same model and parameters, but
data is distributed across the different nodes. By distributing the
data across multiple nodes, data parallelism reduces the total
execution time of large batch size inputs compared to sequential execution.
Data parallelism works best for smaller models in latency sensitive
applications that have large batch size requirements.
torch.neuron.DataParallel
-------------------------
To fully leverage the Inferentia hardware, we want to use all available
NeuronCores. An inf1.xlarge and inf1.2xlarge have four NeuronCores, an
inf1.6xlarge has 16 NeuronCores, and an inf1.24xlarge has 64 NeuronCores.
For maximum performance on Inferentia hardware, we can use
:func:`torch.neuron.DataParallel` to utilize all available NeuronCores.
:func:`torch.neuron.DataParallel` implements data parallelism at the module
level by replicating the Neuron model on all available NeuronCores
and distributing data across the different cores for parallelized inference.
This function is analogous to :class:`~torch.nn.DataParallel` in PyTorch.
:func:`torch.neuron.DataParallel` requires PyTorch >= 1.8.
The following sections provide an overview of some of the features
of :func:`torch.neuron.DataParallel` that enable maximum performance on
Inferentia.
NeuronCore selection
^^^^^^^^^^^^^^^^^^^^
By default, DataParallel will try to use all NeuronCores allocated to the
current process to fully saturate the Inferentia hardware for maximum performance.
It is more efficient to make the batch dimension divisible by the number of
NeuronCores. This will ensure that NeuronCores are not left idle during
parallel inference and the Inferentia hardware is fully utilized.
In some applications, it is advantageous to use a subset of the
available NeuronCores for DataParallel inference. DataParallel has a
``device_ids`` argument that accepts a list of :obj:`int` or ``'nc:#'``
that specify the NeuronCores to use for parallelization. See
:ref:`Specifying NeuronCores <dataparallel_example_specify_ncs>`
for an example of how to use the ``device_ids`` argument.
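As a quick illustration, here is a minimal sketch (assuming a model already
compiled by :func:`torch_neuron.trace` is available as ``model_neuron`` and
takes a hypothetical ``[batch, 3, 224, 224]`` image input):
.. code-block:: python

    import torch
    import torch_neuron
    # Replicate the model on NeuronCores 0 and 2 only
    model_parallel = torch.neuron.DataParallel(model_neuron, device_ids=['nc:0', 'nc:2'])
    # The batch (here size 2) is split across the two selected NeuronCores
    output = model_parallel(torch.rand([2, 3, 224, 224]))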
Batch dim
^^^^^^^^^
DataParallel accepts a ``dim`` argument that denotes the batch dimension used
to split the input data for distributed inference. By default,
DataParallel splits the inputs on ``dim = 0`` if the ``dim`` argument is not
specified. For applications with a non-zero batch dim, the ``dim`` argument
can be used to specify the inference-time input batch dimension.
:ref:`DataParallel with dim != 0 <dataparallel_example_dim_neq_zero>` provides an
example of data parallel inference on inputs with batch dim = 2.
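For instance, a minimal sketch of splitting inputs along a non-zero batch
dimension (assuming a hypothetical ``model_neuron`` compiled against inputs
whose batch dimension is 2):
.. code-block:: python

    # Inputs are batched along dim=2, e.g. a hypothetical shape [1, 3, batch, 224]
    model_parallel = torch.neuron.DataParallel(model_neuron, dim=2)
    # The input is split into chunks along dim=2 for parallel inference
    output = model_parallel(torch.rand([1, 3, 8, 224]))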
.. _dynamic_batching_description:
Dynamic batching
^^^^^^^^^^^^^^^^
Batch size has a direct impact on model performance. The Inferentia chip is optimized
to run with small batch sizes. This means that a Neuron compiled model can outperform
a GPU model, even when running at single-digit batch sizes.
As a general best practice, we recommend optimizing your model's throughput by
compiling the model with a small batch size and gradually increasing it to
find the peak throughput on Inferentia.
Dynamic batching is a feature that allows you to use tensor batch sizes that the
Neuron model was not originally compiled against. This is necessary because the
underlying Inferentia hardware will always execute inferences with the batch
size used during compilation. Fixed batch size execution allows tuning the
input batch size for optimal performance. For example, batch size 1 may be
best suited for an ultra-low latency on-demand inference application, while
batch size > 1 can be used to maximize throughput for offline inferencing.
Dynamic batching is implemented by slicing large input tensors into chunks
that match the batch size used during the :func:`torch_neuron.trace` compilation call.
The :func:`torch.neuron.DataParallel` class automatically enables dynamic batching on
eligible models. This allows us to run inference in applications that have
inputs with a variable batch size without needing to recompile the model. See
:ref:`Dynamic batching <dataparallel_example_dynamic_batching>` for an example
of how DataParallel can be used to run inference on inputs with a dynamic batch
size without needing to recompile the model.
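As a minimal sketch of this behavior (using a hypothetical torchvision
ResNet-50 as the example model), a model compiled with batch size 4 can
serve a larger inference-time batch without recompilation:
.. code-block:: python

    import torch
    import torch_neuron
    from torchvision import models
    # Compile with a small, fixed batch size of 4
    model = models.resnet50(pretrained=True)
    model.eval()
    model_neuron = torch_neuron.trace(model, torch.rand([4, 3, 224, 224]))
    # DataParallel automatically enables dynamic batching on eligible models,
    # so a batch of 24 is sliced into chunks of the compile-time batch size
    # and distributed across the NeuronCores
    model_parallel = torch.neuron.DataParallel(model_neuron)
    output = model_parallel(torch.rand([24, 3, 224, 224]))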
Dynamic batching using small batch sizes can result in sub-optimal throughput
because it involves slicing tensors into chunks and iteratively sending data
to the hardware. Using a larger batch size at compilation time can use the
Inferentia hardware more efficiently in order to maximize throughput. You can
test the tradeoff between individual request latency and total throughput by
fine-tuning the input batch size.
Dynamic batching in the DataParallel module can be disabled using the
``disable_dynamic_batching()`` function as follows:
.. code-block:: python
>>> model_parallel = torch.neuron.DataParallel(model_neuron)
>>> model_parallel.disable_dynamic_batching()
If dynamic batching is disabled, the compile-time batch size must be equal to
the inference-time batch size divided by the number of NeuronCores.
:ref:`DataParallel with dim != 0 <dataparallel_example_dim_neq_zero>` and
:ref:`Dynamic batching disabled <dataparallel_example_disable_dynamic_batching>`
provide examples of running DataParallel inference with dynamic batching
disabled.
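As a concrete sketch of this requirement (hypothetical shapes, assuming 4
visible NeuronCores), an inference-time batch of 8 requires a compile-time
batch size of 8 / 4 = 2:
.. code-block:: python

    # Compile with batch size 2
    model_neuron = torch_neuron.trace(model, torch.rand([2, 3, 224, 224]))
    model_parallel = torch.neuron.DataParallel(model_neuron)
    model_parallel.disable_dynamic_batching()
    # Inference-time batch of 8 = 2 (compile-time batch) * 4 (NeuronCores)
    output = model_parallel(torch.rand([8, 3, 224, 224]))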
Performance optimizations
^^^^^^^^^^^^^^^^^^^^^^^^^
The DataParallel module has a ``num_workers`` attribute that can be used to
specify the number of worker threads used for multithreaded inference. By
default, ``num_workers = 2 * number of NeuronCores``. This value can be
fine tuned to optimize DataParallel performance.
DataParallel has a ``split_size`` attribute that dictates the size of the input
chunks that are distributed to each NeuronCore. By default,
``split_size = max(1, input.shape[dim] // number of NeuronCores)``. This value
can be modified to optimally match the inference input chunk size with the
compile-time batch size.
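For example, a sketch of tuning these attributes after construction (the
values here are illustrative only):
.. code-block:: python

    model_parallel = torch.neuron.DataParallel(model_neuron)
    model_parallel.num_workers = 4   # worker threads used for multithreaded inference
    model_parallel.split_size = 2    # input chunk size distributed to each NeuronCore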
.. _data_parallel_examples:
Examples
--------
The following sections provide example usages of the
:func:`torch.neuron.DataParallel` module.
.. _dataparallel_example_default:
Default usage
^^^^^^^^^^^^^
.. include:: /frameworks/torch/torch-neuron/torch-neuron-dataparallel-example-default.rst
.. _dataparallel_example_specify_ncs:
Specifying NeuronCores
^^^^^^^^^^^^^^^^^^^^^^
.. include:: /frameworks/torch/torch-neuron/torch-neuron-dataparallel-example-specify-ncs.rst
.. _dataparallel_example_dim_neq_zero:
DataParallel with dim != 0
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /frameworks/torch/torch-neuron/torch-neuron-dataparallel-example-dim-neq-zero.rst
.. _dataparallel_example_dynamic_batching:
Dynamic batching
^^^^^^^^^^^^^^^^
.. include:: /frameworks/torch/torch-neuron/torch-neuron-dataparallel-example-dynamic-batching.rst
.. _dataparallel_example_disable_dynamic_batching:
Dynamic batching disabled
^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: /frameworks/torch/torch-neuron/torch-neuron-dataparallel-example-disable-dynamic-batching.rst
Full tutorial with torch.neuron.DataParallel
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For an end-to-end tutorial that uses DataParallel, see the
:ref:`PyTorch Resnet Tutorial </src/examples/pytorch/resnet50.ipynb>`.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuron/guides/torch-lstm-support.rst.txt
```
.. _torch_neuron_lstm_support:
Developer Guide - PyTorch Neuron (``torch-neuron``) |LSTM| Support
==================================================================
The `torch-neuron` package can support |LSTM| operations and yield
high performance on both fixed-length and variable-length sequences. Most
network configurations can be supported, with the exception of those that
require |PackedSequence| usage outside of |LSTM| or |pad_packed_sequence|
operations. Neuron must guarantee that the shapes can remain fixed throughout
the network.
The following sections describe which scenarios can and cannot be supported.
Supported Usage
---------------
Fixed-Length Sequences
~~~~~~~~~~~~~~~~~~~~~~
In normal usage of an |LSTM|, the inputs and outputs are expected to have a
fixed sequence length. This is the most basic usage of an |LSTM| but may not
be applicable to applications where the input sequence length varies.
.. code-block:: python
import torch
import torch_neuron
class Network(torch.nn.Module):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(input_size=3, hidden_size=7)
def forward(self, inputs):
output, (ht, ct) = self.lstm(inputs)
return output, (ht, ct)
# Example Inputs
seq_len, batch_size, input_size = 5, 2, 3
inputs = torch.rand(seq_len, batch_size, input_size)
# Trace
torch_neuron.trace(Network(), (inputs,))
Packed Input, Padded Output, *Pre-Sorted* Inputs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A common usage of an |LSTM| is when the input sequence sizes vary according
to the input sequence lengths (for example, with tokenized text).
For example, the following sentences could result in two different
sequence lengths after tokenization:
.. code-block:: python
# Input
text = [
'Hello, sailor',
'Example',
]
# ... Tokenization ...
# Result
tokens = [
[101, 7592, 1010, 11803, 102],
[101, 2742, 102, 0, 0],
]
lengths = [5, 3]
Because the lengths are different, the final |LSTM| state will be dependent upon
the lengths of each sequence in the batch. Torch provides a way to deal with
these types of sequences by densely packing batches into a |PackedSequence|. The
most common way this is constructed is by using the |pack_padded_sequence|
utility function prior to feeding inputs into the |LSTM|.
Packing the above sequences would result in the following data and batch
size tensors.
.. code-block:: python
data = [101, 101, 7592, 2742, 1010, 102, 11803, 102]
batch_sizes = [2, 2, 2, 1, 1]
In addition to correctly computing the final |LSTM| state, using a packed
sequence instead of a padded sequence also improves model performance on CPU.
On Neuron, where computation is fixed to the maximum length ahead of time,
**this does not improve performance**.
When an |LSTM| is processing a |PackedSequence|, it must do so in a descending
sorted length order. To ensure that sequences are sorted, |pack_padded_sequence|
provides an ``enforce_sorted`` flag. When ``enforce_sorted`` is ``True``, the
input is *already expected* to contain sequences sorted by length in a
decreasing order along the batch dimension. Note that this must be enforced in
the application-level code but is only relevant when batch size > 1.
The following network can compile successfully because the input and output
to the network are guaranteed to be a fixed shape. The input shape is expected
to be a padded tensor and the output tensor is expected to be padded to the
maximum sequence length using the |pad_packed_sequence| function call:
.. code-block:: python
:emphasize-lines: 14
import torch
import torch_neuron
class Network(torch.nn.Module):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(input_size=3, hidden_size=7)
def forward(self, inputs, lengths):
packed_input = torch.nn.utils.rnn.pack_padded_sequence(
inputs,
lengths=lengths,
enforce_sorted=True,
)
packed_result, (ht, ct) = self.lstm(packed_input)
padded_result, _ = torch.nn.utils.rnn.pad_packed_sequence(packed_result)
return padded_result, ht, ct
# Example Inputs
seq_len, batch_size, input_size = 5, 2, 3
inputs = torch.rand(seq_len, batch_size, input_size)
lengths = torch.tensor([seq_len] * batch_size)
# Trace
torch_neuron.trace(Network(), (inputs, lengths))
Packed Input, Padded Output, *Unsorted* Inputs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When ``enforce_sorted`` is ``False``, the input will be sorted unconditionally.
This causes some CPU overhead on Neuron because unsupported operators will be
inserted into the graph such as ``aten::sort`` and ``aten::scatter_``. The
``aten::lstm`` operation can still be supported, but it will be less efficient
than when ``enforce_sorted`` is ``True``.
The following code is able to be traced, but results in the sorting
operations running on CPU. This is not problematic in this case because the
``aten::sort`` and ``aten::scatter_`` are executed on CPU at the very beginning
of the graph just prior to Neuron execution.
Like the previous example, the call to |pad_packed_sequence| ensures that the
output is a fixed-shape based on the maximum sequence length.
.. code-block:: python
:emphasize-lines: 14
import torch
import torch_neuron
class Network(torch.nn.Module):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(input_size=3, hidden_size=7)
def forward(self, inputs, lengths):
packed_input = torch.nn.utils.rnn.pack_padded_sequence(
inputs,
lengths=lengths,
enforce_sorted=False,
)
packed_result, (ht, ct) = self.lstm(packed_input)
padded_result, _ = torch.nn.utils.rnn.pad_packed_sequence(packed_result)
return padded_result, ht, ct
# Example Inputs
seq_len, batch_size, input_size = 5, 2, 3
inputs = torch.rand(seq_len, batch_size, input_size)
lengths = torch.tensor([seq_len] * batch_size)
# Trace
trace = torch_neuron.trace(Network(), (inputs, lengths))
Packed Inputs, Final Hidden & Cell State Only
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When **only** the final |LSTM| hidden & cell state is used, it does not
matter if the inputs are packed or unpacked since these state
tensors will not vary in size.
.. code-block:: python
:emphasize-lines: 16,17
import torch
import torch_neuron
class Network(torch.nn.Module):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(input_size=3, hidden_size=7)
def forward(self, inputs, lengths):
packed_input = torch.nn.utils.rnn.pack_padded_sequence(
inputs,
lengths=lengths,
enforce_sorted=True,
)
packed_output, (ht, ct) = self.lstm(packed_input)
return ht, ct
# Example Inputs
seq_len, batch_size, input_size = 5, 2, 3
inputs = torch.rand(seq_len, batch_size, input_size)
lengths = torch.tensor([seq_len] * batch_size)
# Trace
trace = torch_neuron.trace(Network(), (inputs, lengths))
Note that when the ``packed_output`` is unused, it does not need to be passed
to the |pad_packed_sequence| to enable the |LSTM| to be compiled.
Unsupported Usage
-----------------
Neuron does not support the use of a |PackedSequence| outside of the |LSTM|
operation and the |pad_packed_sequence| operation. This is because the shape of
a |PackedSequence| can vary depending on the input data. This is incompatible
with the Neuron restriction that all tensor sizes must be known at compilation
time. When a |PackedSequence| is used only by an |LSTM| or |pad_packed_sequence|
operation, Neuron *can guarantee* the size of the intermediary tensors by
padding on behalf of the application.
This means that if the |PackedSequence| is either used by a different
operation or returned from the network, then either all of the |LSTM|
operations will be executed on CPU or the network compilation will fail.
|PackedSequence| Returned
~~~~~~~~~~~~~~~~~~~~~~~~~
The following is unsupported because the |PackedSequence| result of the |LSTM|
is returned by the network:
.. code-block:: python
:emphasize-lines: 14
class Network(torch.nn.Module):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(input_size=3, hidden_size=7)
def forward(self, inputs, lengths):
packed_input = torch.nn.utils.rnn.pack_padded_sequence(
inputs,
lengths=lengths,
enforce_sorted=False,
)
packed_result, (ht, ct) = self.lstm(packed_input)
return packed_result.data, ht, ct
**Behavior**: In this case, compilation fails and the following warning is
generated:
.. code-block:: text
Operator "aten::lstm" consuming a PackedSequence input can only be supported when its corresponding PackedSequence output is unused or unpacked using "aten::_pad_packed_input". Found usage by "prim::Return"
**Resolution**: To avoid this error, the ``packed_result`` should be padded
prior to being returned from the network by using |pad_packed_sequence|.
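A minimal sketch of the resolved network (mirroring the supported unsorted
example shown earlier) unpacks the result before returning it:
.. code-block:: python

    def forward(self, inputs, lengths):
        packed_input = torch.nn.utils.rnn.pack_padded_sequence(
            inputs,
            lengths=lengths,
            enforce_sorted=False,
        )
        packed_result, (ht, ct) = self.lstm(packed_input)
        # Unpack to a fixed-shape padded tensor so that no PackedSequence
        # escapes the network
        padded_result, _ = torch.nn.utils.rnn.pad_packed_sequence(packed_result)
        return padded_result, ht, ct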
Invalid |PackedSequence| Usage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following is unsupported because the |PackedSequence| result of the |LSTM|
is used by a non-LSTM operator:
.. code-block:: python
:emphasize-lines: 14
class Network(torch.nn.Module):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(input_size=3, hidden_size=7)
def forward(self, inputs, lengths):
packed_input = torch.nn.utils.rnn.pack_padded_sequence(
inputs,
lengths=lengths,
enforce_sorted=False,
)
packed_result, (ht, ct) = self.lstm(packed_input)
return torch.max(packed_result.data)
**Behavior**: In this case, compilation fails and the following warning is
generated:
.. code-block:: text
Operator "aten::lstm" consuming a PackedSequence input can only be supported when its corresponding PackedSequence output is unused or unpacked using "aten::_pad_packed_input". Found usage by "aten::max"
**Resolution**: To avoid this error, the ``packed_result`` should be padded
using |pad_packed_sequence| prior to being used in the :func:`~torch.max`
operation.
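A minimal sketch of this resolution unpacks the result before applying the
reduction (note that the maximum is then taken over the zero-padded tensor):
.. code-block:: python

    def forward(self, inputs, lengths):
        packed_input = torch.nn.utils.rnn.pack_padded_sequence(
            inputs,
            lengths=lengths,
            enforce_sorted=False,
        )
        packed_result, (ht, ct) = self.lstm(packed_input)
        # Unpack first; the reduction then operates on a fixed-shape tensor
        padded_result, _ = torch.nn.utils.rnn.pad_packed_sequence(packed_result)
        return torch.max(padded_result)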
.. |LSTM| replace:: :class:`~torch.nn.LSTM`
.. |PackedSequence| replace:: :class:`~torch.nn.utils.rnn.PackedSequence`
.. |pack_padded_sequence| replace:: :func:`~torch.nn.utils.rnn.pack_padded_sequence`
.. |pad_packed_sequence| replace:: :func:`~torch.nn.utils.rnn.pad_packed_sequence`
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _torch_neuron_lstm_support:
Developer Guide - PyTorch Neuron (``torch-neuron``) |LSTM| Support
==================================================================
The `torch-neuron` package can support |LSTM| operations and yield
high performance on both fixed-length and variable-length sequences. Most
network configurations can be supported, with the exception of those that
require |PackedSequence| usage outside of |LSTM| or |pad_packed_sequence|
operations. Neuron must guarantee that the shapes can remain fixed throughout
the network.
The following sections describe which scenarios can and cannot be supported.
Supported Usage
---------------
Fixed-Length Sequences
~~~~~~~~~~~~~~~~~~~~~~
In normal usage of an |LSTM|, the inputs and outputs are expected to be a fixed
size sequence length. This is the most basic usage of an |LSTM| but may not be
applicable to applications where the input sequence length may vary.
.. code-block:: python
import torch
import torch_neuron
class Network(torch.nn.Module):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(input_size=3, hidden_size=7)
def forward(self, inputs):
output, (ht, ct) = self.lstm(inputs)
return output, (ht, ct)
# Example Inputs
seq_len, batch_size, input_size = 5, 2, 3
inputs = torch.rand(seq_len, batch_size, input_size)
# Trace
torch_neuron.trace(Network(), (inputs,))
Packed Input, Padded Output, *Pre-Sorted* Inputs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A common usage of an |LSTM| is when the input sequence sizes vary according
to an input sequence lengths (such as tokens).
For example, the following sentences could result in two different
sequence lengths after tokenization:
.. code-block:: python
# Input
text = [
'Hello, sailor',
'Example',
]
# ... Tokenization ...
# Result
tokens = [
[101, 7592, 1010, 11803, 102],
[101, 2742, 102, 0, 0],
]
lengths = [5, 3]
Because the lengths are different, the final |LSTM| state will be dependent upon
the lengths of each sequence in the batch. Torch provides a way to deal with
these types of sequences by densely packing batches into a |PackedSequence|. The
most common way this is constructed is by using the |pack_padded_sequence|
utility function prior to feeding inputs into the |LSTM|.
Packing the above sequences would result in the following data and batch
size tensors.
.. code-block:: python
data = [101, 101, 7592, 2742, 1010, 102, 11803, 102]
batch_sizes = [2, 2, 2, 1, 1]
In addition to correctly computing final |LSTM| state, using a packed
sequence instead of a padded sequence also improves model performance on CPU.
On Neuron, where computation is fixed to the maximum length ahead of time,
**this is does not improve performance**.
When an |LSTM| is processing a |PackedSequence|, it must do so in a descending
sorted length order. To ensure that sequences are sorted, |pack_padded_sequence|
provides an ``enforce_sorted`` flag. When ``enforce_sorted`` is ``True``, the
input is *already expected* to contain sequences sorted by length in a
decreasing order along the batch dimension. Note that this must be enforced in
the application-level code but is only relevant when batch size > 1.
The following network can compile successfully because the input and output
to the network are guaranteed to be a fixed shape. The input shape is expected
to be a padded tensor and the output tensor is expected to be padded to the
maximum sequence length using the |pad_packed_sequence| function call:
.. code-block:: python
:emphasize-lines: 14
import torch
import torch_neuron
class Network(torch.nn.Module):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(input_size=3, hidden_size=7)
def forward(self, inputs, lengths):
packed_input = torch.nn.utils.rnn.pack_padded_sequence(
inputs,
lengths=lengths,
enforce_sorted=True,
)
packed_result, (ht, ct) = self.lstm(packed_input)
padded_result, _ = torch.nn.utils.rnn.pad_packed_sequence(packed_result)
return padded_result, ht, ct
# Example Inputs
seq_len, batch_size, input_size = 5, 2, 3
inputs = torch.rand(seq_len, batch_size, input_size)
lengths = torch.tensor([seq_len] * batch_size)
# Trace
torch_neuron.trace(Network(), (inputs, lengths))
Packed Input, Padded Output, *Unsorted* Inputs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When ``enforce_sorted`` is ``False``, the input will be sorted unconditionally.
This causes some CPU overhead on Neuron because unsupported operators such as
``aten::sort`` and ``aten::scatter_`` will be inserted into the graph. The
``aten::lstm`` operation can still be supported, but it will be less efficient
than when ``enforce_sorted`` is ``True``.
The following code can be traced, but results in the sorting operations
running on CPU. This is not problematic in this case because the
``aten::sort`` and ``aten::scatter_`` operations are executed on CPU at the
very beginning of the graph, just prior to Neuron execution.
Like the previous example, the call to |pad_packed_sequence| ensures that the
output has a fixed shape based on the maximum sequence length.
.. code-block:: python
:emphasize-lines: 14
import torch
import torch_neuron
class Network(torch.nn.Module):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(input_size=3, hidden_size=7)
def forward(self, inputs, lengths):
packed_input = torch.nn.utils.rnn.pack_padded_sequence(
inputs,
lengths=lengths,
enforce_sorted=False,
)
packed_result, (ht, ct) = self.lstm(packed_input)
padded_result, _ = torch.nn.utils.rnn.pad_packed_sequence(packed_result)
return padded_result, ht, ct
# Example Inputs
seq_len, batch_size, input_size = 5, 2, 3
inputs = torch.rand(seq_len, batch_size, input_size)
lengths = torch.tensor([seq_len] * batch_size)
# Trace
trace = torch_neuron.trace(Network(), (inputs, lengths))
Packed Inputs, Final Hidden & Cell State Only
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When **only** the final |LSTM| hidden & cell state is used, it does not
matter if the inputs are packed or unpacked since these state
tensors will not vary in size.
.. code-block:: python
:emphasize-lines: 16,17
import torch
import torch_neuron
class Network(torch.nn.Module):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(input_size=3, hidden_size=7)
def forward(self, inputs, lengths):
packed_input = torch.nn.utils.rnn.pack_padded_sequence(
inputs,
lengths=lengths,
enforce_sorted=True,
)
packed_output, (ht, ct) = self.lstm(packed_input)
return ht, ct
# Example Inputs
seq_len, batch_size, input_size = 5, 2, 3
inputs = torch.rand(seq_len, batch_size, input_size)
lengths = torch.tensor([seq_len] * batch_size)
# Trace
trace = torch_neuron.trace(Network(), (inputs, lengths))
Note that when ``packed_output`` is unused, it does not need to be passed
to |pad_packed_sequence| for the |LSTM| to be compiled.
Unsupported Usage
-----------------
Neuron does not support the use of a |PackedSequence| outside of the |LSTM|
operation and the |pad_packed_sequence| operation. This is because the shape of
a |PackedSequence| can vary depending on the input data. This is incompatible
with the Neuron restriction that all tensor sizes must be known at compilation
time. When a |PackedSequence| is used only by an |LSTM| or |pad_packed_sequence|
operation, Neuron *can guarantee* the size of the intermediary tensors by
padding on behalf of the application.
This means that if the |PackedSequence| is either used by a different operation
or returned from the network, then either all of the |LSTM| operations will be
executed on CPU or the network compilation will fail.
|PackedSequence| Returned
~~~~~~~~~~~~~~~~~~~~~~~~~
The following is unsupported because the |PackedSequence| result of the |LSTM|
is returned by the network:
.. code-block:: python
:emphasize-lines: 14
class Network(torch.nn.Module):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(input_size=3, hidden_size=7)
def forward(self, inputs, lengths):
packed_input = torch.nn.utils.rnn.pack_padded_sequence(
inputs,
lengths=lengths,
enforce_sorted=False,
)
packed_result, (ht, ct) = self.lstm(packed_input)
return packed_result.data, ht, ct
**Behavior**: In this case, compilation fails and the following warning is
generated:
.. code-block:: text
Operator "aten::lstm" consuming a PackedSequence input can only be supported when its corresponding PackedSequence output is unused or unpacked using "aten::_pad_packed_input". Found usage by "prim::Return"
**Resolution**: To avoid this error, the ``packed_result`` should be padded
prior to being returned from the network by using |pad_packed_sequence|.
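For example, the following is a minimal sketch of a corrected ``forward``
method, assuming the same network structure as the example above:

.. code-block:: python

    def forward(self, inputs, lengths):
        packed_input = torch.nn.utils.rnn.pack_padded_sequence(
            inputs,
            lengths=lengths,
            enforce_sorted=False,
        )
        packed_result, (ht, ct) = self.lstm(packed_input)
        # Unpack to a fixed-shape padded tensor before returning
        padded_result, _ = torch.nn.utils.rnn.pad_packed_sequence(packed_result)
        return padded_result, ht, ct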
Invalid |PackedSequence| Usage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following is unsupported because the |PackedSequence| result of the |LSTM|
is used by a non-LSTM operator:
.. code-block:: python
:emphasize-lines: 14
class Network(torch.nn.Module):
def __init__(self):
super().__init__()
self.lstm = torch.nn.LSTM(input_size=3, hidden_size=7)
def forward(self, inputs, lengths):
packed_input = torch.nn.utils.rnn.pack_padded_sequence(
inputs,
lengths=lengths,
enforce_sorted=False,
)
packed_result, (ht, ct) = self.lstm(packed_input)
return torch.max(packed_result.data)
**Behavior**: In this case, compilation fails and the following warning is
generated:
.. code-block:: text
Operator "aten::lstm" consuming a PackedSequence input can only be supported when its corresponding PackedSequence output is unused or unpacked using "aten::_pad_packed_input". Found usage by "aten::max"
**Resolution**: To avoid this error, the ``packed_result`` should be padded
using |pad_packed_sequence| prior to being used in the :func:`~torch.max`
operation.
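The following is a minimal sketch of a corrected ``forward`` method under the
same assumptions. Note that the padded positions are zero-filled and
participate in the reduction, so mask them first if that matters for your
application:

.. code-block:: python

    def forward(self, inputs, lengths):
        packed_input = torch.nn.utils.rnn.pack_padded_sequence(
            inputs,
            lengths=lengths,
            enforce_sorted=False,
        )
        packed_result, (ht, ct) = self.lstm(packed_input)
        # Unpack first so that torch.max consumes a fixed-shape padded tensor
        padded_result, _ = torch.nn.utils.rnn.pad_packed_sequence(packed_result)
        return torch.max(padded_result)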
.. |LSTM| replace:: :class:`~torch.nn.LSTM`
.. |PackedSequence| replace:: :class:`~torch.nn.utils.rnn.PackedSequence`
.. |pack_padded_sequence| replace:: :func:`~torch.nn.utils.rnn.pack_padded_sequence`
.. |pad_packed_sequence| replace:: :func:`~torch.nn.utils.rnn.pad_packed_sequence`
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuron/troubleshooting-guide.rst.txt
.. _pytorch-neuron-inference-troubleshooting:
Troubleshooting Guide for PyTorch Neuron (``torch-neuron``)
===========================================================
General Torch-Neuron issues
---------------------------
If you see an error about "Unknown builtin op: neuron::forward_1" like the one below, please ensure that the import line ``import torch_neuron`` (which registers the Neuron custom operations) appears in the inference script before ``torch.jit.load`` is used.
::
Unknown builtin op: neuron::forward_1.
Could not find any similar ops to neuron::forward_1. This op may not exist or may not be currently supported in TorchScript.
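A minimal sketch of the expected import order (the file name
``model_neuron.pt`` is an illustrative placeholder):

.. code-block:: python

    import torch
    import torch_neuron  # registers the neuron:: operators with TorchScript

    model = torch.jit.load('model_neuron.pt')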
TorchVision related issues
--------------------------
If you encounter an error like the one below, it is because torchvision
versions >= 0.7 are not compatible with Torch-Neuron 1.5.1. Please
downgrade torchvision to version 0.6.1:
::
E AttributeError: module 'torch.jit' has no attribute '_script_if_tracing'
2GB protobuf limit related issues
---------------------------------
If you encounter an error like the one below, it is because the model size is larger than 2GB.
To compile such large models, use the :ref:`separate_weights=True <torch_neuron_trace_api>` flag. Note:
ensure that you have the latest version of the compiler installed to support this flag.
You can upgrade neuron-cc using
:code:`python3 -m pip install neuron-cc[tensorflow] -U --force --extra-index-url=https://pip.repos.neuron.amazonaws.com`
::
E google.protobuf.message.DecodeError: Error parsing message with type 'tensorflow.GraphDef'
torch.jit.trace issues
----------------------
The :ref:`/neuron-guide/neuron-frameworks/pytorch-neuron/api-compilation-python-api.rst`
uses the PyTorch :func:`torch.jit.trace` function to generate
:class:`~torch.jit.ScriptModule` models for execution on Inferentia. Because of
this, your PyTorch model must be torch-jit-traceable to execute on Inferentia.
You can try modifying your underlying PyTorch model code to make it traceable.
If it's not possible to change your model code, you can :ref:`write a wrapper
around your model <wrapping-non-traceable-models>` that makes it
torch-jit-traceable to compile it for Inferentia.
Please visit :func:`torch.jit.trace` to review the properties that a model must
have to be torch-jit-traceable. The PyTorch-Neuron trace API
:func:`torch_neuron.trace` accepts :code:`**kwargs` for :func:`torch.jit.trace`.
For example, you can use the :code:`strict=False` flag to
:ref:`compile models with dictionary outputs <compiling-models-with-kwargs>`.
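As a minimal sketch of forwarding a :func:`torch.jit.trace` keyword argument,
assuming a hypothetical model that returns a dictionary of tensors:

.. code-block:: python

    import torch
    import torch_neuron
    import torch.nn as nn

    # Hypothetical model with a dictionary output
    class DictOutputModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(1, 1, 3)

        def forward(self, x):
            return {'result': self.conv(x)}

    model = DictOutputModel()
    model.eval()
    inputs = torch.rand(1, 1, 3, 3)

    # strict=False is forwarded to torch.jit.trace to allow the dict output
    model_neuron = torch_neuron.trace(model, inputs, strict=False)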
.. _wrapping-non-traceable-models:
Compiling models with outputs that are not torch-jit-traceable
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To enable compilation of models with non torch-jit-traceable outputs, you can
write a wrapper that converts the model's output into a form that is
torch-jit-traceable. You can then compile the wrapped model for Inferentia
using :func:`torch_neuron.trace`.
The following example uses a wrapper to compile a model with non
torch-jit-traceable outputs. This model cannot be compiled for Inferentia in
its current form because it outputs a list of tuples and tensors, which is not
torch-jit-traceable.
.. code-block:: python
import torch
import torch_neuron
import torch.nn as nn
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.conv = nn.Conv2d(1, 1, 3)
def forward(self, x):
a = self.conv(x) + 1
b = self.conv(x) + 2
c = self.conv(x) + 3
# An output that is a list of tuples and tensors is not torch-traceable
return [(a, b), c]
model = Model()
model.eval()
inputs = torch.rand(1, 1, 3, 3)
# Try to compile the model
model_neuron = torch.neuron.trace(model, inputs) # ERROR: This cannot be traced, we must change the output format
To compile this model for Inferentia, we can write a wrapper around the model
to convert its outputs into a tuple of tensors, which is torch-jit-traceable.
.. code-block:: python
class NeuronCompatibilityWrapper(nn.Module):
def __init__(self):
super(NeuronCompatibilityWrapper, self).__init__()
self.model = Model()
def forward(self, x):
out = self.model(x)
# An output that is a tuple of tuples and tensors is torch-jit-traceable
return tuple(out)
Now, we can successfully compile the model for Inferentia using the
:code:`NeuronCompatibilityWrapper` wrapper as follows:
.. code-block:: python
model = NeuronCompatibilityWrapper()
model.eval()
# Compile the traceable wrapped model
model_neuron = torch.neuron.trace(model, inputs)
If the model's outputs must be in the original form, a second wrapper can be
used to transform the outputs after compilation for Inferentia. The following
example uses the :code:`OutputFormatWrapper` wrapper to convert the compiled
model's output back into the original form of a list of tuples and tensors.
.. code-block:: python
class OutputFormatWrapper(nn.Module):
def __init__(self):
super(OutputFormatWrapper, self).__init__()
self.traceable_model = NeuronCompatibilityWrapper()
def forward(self, x):
out = self.traceable_model(x)
# Return the output in the original format of Model()
return list(out)
model = OutputFormatWrapper()
model.eval()
# Compile the traceable wrapped model
model.traceable_model = torch.neuron.trace(model.traceable_model, inputs)
Compiling a submodule in a model that is not torch-jit-traceable
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following example shows how to compile a submodule that is part of a non
torch-jit-traceable model. In this example, the top-level model :code:`Outer`
uses a dynamic flag, which is not torch-jit-traceable. However, the
submodule :code:`Inner` is torch-jit-traceable and can be compiled for
Inferentia.
.. code-block:: python
import torch
import torch_neuron
import torch.nn as nn
class Inner(nn.Module):
def __init__(self):
super().__init__()
self.conv = nn.Conv2d(1, 1, 3)
def forward(self, x):
return self.conv(x) + 1
class Outer(nn.Module):
def __init__(self):
super().__init__()
self.inner = Inner()
def forward(self, x, add_offset: bool = False):
base = self.inner(x)
if add_offset:
return base + 1
return base
model = Outer()
inputs = torch.rand(1, 1, 3, 3)
# Compile the traceable wrapped submodule
model.inner = torch.neuron.trace(model.inner, inputs)
# TorchScript the model for serialization
script = torch.jit.script(model)
torch.jit.save(script, 'model.pt')
loaded = torch.jit.load('model.pt')
Alternatively, for usage scenarios in which the model configuration is static
during inference, the dynamic flags can be hardcoded in a wrapper to make
the model torch-jit-traceable and enable compiling the entire model for Inferentia.
In this example, we assume the :code:`add_offset` flag is always
:code:`True` during inference, so we can hardcode this conditional path in the
:code:`Static` wrapper to remove the dynamic behavior and compile the entire
model for Inferentia.
.. code-block:: python
class Static(nn.Module):
def __init__(self):
super().__init__()
self.outer = Outer()
def forward(self, x):
# hardcode `add_offset=True`
output = self.outer(x, add_offset=True)
return output
model = Static()
# We can now compile the entire model because `add_offset=True` is hardcoded in the Static wrapper
model_neuron = torch.neuron.trace(model, inputs)
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/training-torch-neuronx.rst.txt
.. _training-torch-neuronx:
Training (``torch-neuronx``)
============================
.. toctree::
:maxdepth: 1
:hidden:
Tutorials </frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx>
Additional Examples </frameworks/torch/torch-neuronx/additional-examples-training>
API Reference Guide </frameworks/torch/torch-neuronx/api-reference-guide/training/index>
Developer Guide </frameworks/torch/torch-neuronx/programming-guide/training/index>
Misc </frameworks/torch/torch-neuronx/misc-training>
.. include:: training-torch-neuronx.txt
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/appnotes/torch-neuron/bucketing-app-note.rst.txt
.. _bucketing_app_note:
Running inference on variable input shapes with bucketing
=========================================================
.. contents:: Table of contents
:local:
:depth: 2
Introduction
------------
With Inferentia, the shape of every input must be fixed at compile time. For
applications that require multiple input sizes, we recommend using padding or
bucketing techniques. Padding requires you to compile your model with the
largest expected input size and pad every input to this maximum size. If the
performance of your model using padding is not within your targets, you can
consider implementing bucketing.
This guide introduces bucketing, a technique to run inference on inputs with
variable shapes on Inferentia. The following sections explain how bucketing can
improve the performance of inference workloads on Inferentia. It covers an
overview of how bucketing works and provides examples of using bucketing in
:ref:`computer vision <bucketing_example_cv>` and
:ref:`natural language processing<bucketing_example_nlp>` applications.
Applications that benefit from bucketing
----------------------------------------
Bucketing refers to compiling your model multiple times with different target
input shapes to create “bucketed models.” :ref:`creating_buckets` provides an
overview on selecting the input shapes that you use to create bucketed models. At
inference time, each input is padded until its shape matches the next largest
bucket shape. The padded input is then passed into the corresponding bucketed model
for inference. By compiling the same model with multiple different input shapes,
the amount of input padding is reduced compared to padding every input to the
maximum size in your dataset, which minimizes the compute overhead and improves
inference performance.
Bucketing works best when multiple different bucketed models are created to efficiently
cover the full range of input shapes. You can fine-tune the model performance
by experimenting with different bucket sizes that correspond to the
distribution of input shapes in your dataset.
Bucketing can only be used if there is an upper bound on the shape of the
inputs. If necessary, an upper bound on the input shape can be enforced using
resizing and other forms of preprocessing.
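For example, the following is a minimal sketch of enforcing an upper bound by
downscaling; the function name and the ``max_hw`` limit are illustrative,
assuming 4-D NCHW image tensors:

.. code-block:: python

    import torch

    def cap_input_shape(image, max_hw=800):
        # Downscale an NCHW image so neither spatial dimension exceeds max_hw
        h, w = image.shape[-2:]
        scale = max_hw / max(h, w)
        if scale < 1.0:
            image = torch.nn.functional.interpolate(
                image,
                size=(int(h * scale), int(w * scale)),
                mode='bilinear',
                align_corners=False,
            )
        return image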
.. _num_buckets:
The upper bound on the number of bucketed models that you use is dictated by the
total size of the compiled bucketed models. Each Inferentia chip has 8GB of
DRAM, or 2GB of DRAM per NeuronCore. The inf1.xlarge and inf1.2xlarge each have
1 Inferentia chip, an inf1.6xlarge has 4 Inferentia chips, and an inf1.24xlarge
has 16 Inferentia chips. Thus, you should limit the total size of all bucketed
models to around 8GB per Inferentia chip or 2GB per NeuronCore.
The following formula provides an approximation for the number of
compiled bucketed models you can fit on each NeuronCore:
::
number-of-buckets = round(10^9 / number-of-weights-in-model)
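For example, for a model with roughly 25 million weights (approximately the
size of a ResNet-50), this approximation suggests a budget of about 40
bucketed models per NeuronCore:

::

    number-of-buckets = round(10^9 / 25,000,000) = 40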
We recommend using :ref:`neuron-top <neuron-top-ug>` to monitor the
memory usage on your inf1 instance as you load multiple bucketed models.
Implementing bucketing
-----------------------
Implementing bucketing consists of two main parts: creating multiple bucketed
models at compile-time and running inference using the bucketed models on (padded)
inputs. The following sections describe how to implement bucketing to run
inference in applications that have variable input shapes.
.. _creating_buckets:
Creating bucketed models
^^^^^^^^^^^^^^^^^^^^^^^^^
Before running inference, models should be compiled for different input shapes
that are representative of the input dataset. The input shapes that are used
to compile the models determine the bucket shapes that are used during inference.
The bucket shapes should be chosen to minimize the amount of padding on each new input.
Additionally, there should always be a bucket that’s large enough to handle the
maximum input shape in the dataset. The limit on the number of compiled bucketed
models that can be used is described in this :ref:`section<num_buckets>`.
Running inference with bucketing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
At inference time, each input should be padded to match the size of the next
largest bucket, such that the height and width (or sequence length) of the
padded input equals the size of the bucket. Then, the padded input should
be passed into the corresponding bucket for inference. If necessary, it’s
important to remove and/or crop any aberrant predictions that occur in the
padded region. For example, in object detection applications, bounding box
predictions that occur in the padded regions should be removed to avoid
erroneous predictions.
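The following is a minimal sketch of dropping and clipping box predictions
against the original (unpadded) image size; the function name and the
``(x1, y1, x2, y2)`` box format are illustrative assumptions:

.. code-block:: python

    import torch

    def filter_padded_predictions(boxes, oh, ow):
        # boxes: [N, 4] tensor of (x1, y1, x2, y2) predictions
        # (oh, ow): original height and width before bottom/right padding
        keep = (boxes[:, 0] < ow) & (boxes[:, 1] < oh)
        boxes = boxes[keep]
        # Clip remaining boxes that extend into the padded region
        boxes[:, 0::2] = boxes[:, 0::2].clamp(max=ow)
        boxes[:, 1::2] = boxes[:, 1::2].clamp(max=oh)
        return boxes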
.. _bucketing_examples:
Examples
--------
The following sections provide examples of applying the bucketing technique
to run inference in applications that have variable input shapes.
.. _bucketing_example_cv:
Computer vision bucketing
^^^^^^^^^^^^^^^^^^^^^^^^^^
As an example of implementing bucketing for computer vision models, consider an
application where the heights and widths of images in the dataset are uniformly
distributed between `[400, 400]` and `[800, 800]`. Given that every input
shape between `[400, 400]` and `[800, 800]` is equally likely, it could
make sense to create bucketed models that divide up the range of input shapes into
equally sized chunks. For example, we could create bucketed models for the input shapes
`[500, 500]`, `[600, 600]`, `[700, 700]`, and `[800, 800]`.
As an example of running inference with bucketing, let’s assume that we created
bucketed models for the input shapes `[500, 500]`, `[600, 600]`, `[700, 700]`, and
`[800, 800]`. If we receive an input with shape `[640, 640]`, we would
pad the input to the next largest bucket, `[700, 700]`, and use this bucket
for inference. If we receive an input with shape `[440, 540]`, we would
need to pad the input to the bucket size, `[600, 600]`, and use this bucket
for inference.
As another example of creating bucketed models, consider a computer vision
application where the dataset is not uniformly distributed. As before, let’s
assume the input shapes range from `[400, 400]` to `[800, 800]`. Now, let’s
assume the data shape distribution is bimodal, such that `[540, 540]` and
`[720, 720]` are the two most common input shapes. In this example, it might
make sense to create bucketed models for input shapes `[540, 540]`, `[720, 720]`, and
`[800, 800]` to target the most common shapes while still including the
entire range of input shapes.
End-to-end computer vision bucketing example
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In this example, we run inference in a computer vision application that has
variable shaped images that range in shape from `[400, 400]` to
`[800, 800]`. We create bucketed models for the input shapes `[500, 500]`,
`[600, 600]`, `[700, 700]`, and `[800, 800]` to handle the variable input
shapes.
.. code-block:: python
import numpy as np
import torch
from torchvision import models
import torch_neuron
# Load the model and set it to evaluation mode
model = models.resnet50(pretrained=True)
model.eval()
# Define the bucket sizes that will be used for compilation and inference
bucket_sizes = [(500, 500), (600, 600), (700, 700), (800, 800)]
# Create the bucketed models by compiling a model for each bucket size
buckets = {}
for bucket_size in bucket_sizes:
# Create an example input that is the desired bucket size
h, w = bucket_size
image = torch.rand([1, 3, h, w])
# Compile with the example input to create the bucketed model
model_neuron = torch.neuron.trace(model, image)
# Run a warm up inference to load the model into Inferentia memory
model_neuron(image)
# Add the bucketed model based on its bucket size
buckets[bucket_size] = model_neuron
def get_bucket_and_pad_image(image):
# Determine which bucket size to use
oh, ow = image.shape[-2:]
target_bucket = None
for bucket_size in bucket_sizes:
# Choose a bucket that's larger in both the height and width dimensions
if oh <= bucket_size[0] and ow <= bucket_size[1]:
target_bucket = bucket_size
break
# Pad the image to match the size of the bucket
h_delta = target_bucket[0] - oh
w_delta = target_bucket[1] - ow
b_pad = h_delta # Bottom padding
l_pad = 0 # Left padding
t_pad = 0 # Top padding
r_pad = w_delta # Right padding
# Pad the height and width of the image
padding_amounts = (l_pad, r_pad, t_pad, b_pad)
image_padded = torch.nn.functional.pad(image, padding_amounts, value=0)
return image_padded, target_bucket
# Run inference on inputs with different shapes
for _ in range(10):
# Create an image with a random height and width in range [400, 400] to [800, 800]
h = int(np.random.uniform(low=400, high=800))
w = int(np.random.uniform(low=400, high=800))
image = torch.rand(1, 3, h, w)
# Determine bucket and pad the image
image_padded, target_bucket = get_bucket_and_pad_image(image)
# Use the corresponding bucket to run inference
output = buckets[target_bucket](image_padded)
.. _bucketing_example_nlp:
Natural language processing bucketing
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As an example of implementing bucketing for natural language processing models,
consider an application where the lengths of tokenized sequences in a dataset are
uniformly distributed between 0 and 128 tokens. Given that every tokenized sequence
length between 0 and 128 is equally likely, it might make sense to create
bucketed models that divide up the range of tokenized sequence lengths into equally sized
chunks. For example, we could create bucketed models for tokenized sequence lengths 64
and 128.
As an example of running inference with bucketing, let's assume that we created
bucketed models for the input tokenized sequence lengths 64 and 128. If we receive a
tokenized sequence with length 55, we would need to pad it to the bucket size
64 and use this bucket for inference. If we receive a tokenized sequence with
length 112, we would need to pad it to the bucket size 128 and use this bucket
for inference.
End-to-end natural language processing bucketing example
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In this example, we run inference in a natural language processing application
that has variable length tokenized sequences that range from 0 to 128. We
create bucketed models for lengths 64 and 128 to handle the variable input lengths.
.. code-block:: python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch_neuron
# Build tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc", return_dict=False)
model.eval()
# Define the bucket sizes that will be used for compilation and inference
bucket_sizes = [64, 128]
# Create the bucketed models by compiling a model for each bucket size
buckets = {}
for bucket_size in bucket_sizes:
# Setup some example inputs
sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "HuggingFace's headquarters are situated in Manhattan"
# Create an example input that is the desired bucket size
paraphrase = tokenizer.encode_plus(sequence_0,
sequence_1,
max_length=bucket_size,
padding='max_length',
truncation=True,
return_tensors="pt")
# Convert example inputs to a format that is compatible with TorchScript tracing
example_inputs_paraphrase = paraphrase['input_ids'], paraphrase['attention_mask'], paraphrase['token_type_ids']
# Compile with the example input to create the bucketed model
model_neuron = torch.neuron.trace(model, example_inputs_paraphrase)
# Run a warm up inference to load the model into Inferentia memory
model_neuron(*example_inputs_paraphrase)
# Add the bucketed model based on its bucket size
buckets[bucket_size] = model_neuron
def get_bucket_and_pad_paraphrase(paraphrase):
# Determine which bucket size to use
inputs = paraphrase['input_ids']
attention = paraphrase['attention_mask']
token_type = paraphrase['token_type_ids']
paraphrase_len = inputs.shape[1]
target_bucket = None
for bucket_size in bucket_sizes:
if paraphrase_len <= bucket_size:
target_bucket = bucket_size
break
# Pad the paraphrase to match the size of the bucket
delta = target_bucket - paraphrase_len
zeros = torch.zeros([1, delta], dtype=torch.long)
inputs = torch.cat([inputs, zeros], dim=1)
attention = torch.cat([attention, zeros], dim=1)
token_type = torch.cat([token_type, zeros], dim=1)
paraphrase_padded = inputs, attention, token_type
return paraphrase_padded, target_bucket
# Create two sample sequences
sequence_0 = ("The only other bear similar in size to the polar bear is the "
"Kodiak bear, which is a subspecies of the brown bear. Adult male "
"polar bears weigh 350–700 kg and measure 2.4–3 meters in total "
"length. All bears are short-tailed, the polar bear's tail is "
"relatively the shortest amongst living bears.")
sequence_1 = ("Around the Beaufort Sea, however, mature males reportedly "
"average 450 kg. Adult females are roughly half the size of males "
"and normally weigh 150–250 kg, measuring 1.8–2.4 meters in length. "
"The legs are stocky and the ears and tail are small.")
# Run inference on inputs with different shapes
# We create the variable shapes by randomly cropping the sequences
for _ in range(10):
# Get random sequence lengths between 1 and 128
paraphrase_len = int(np.random.uniform(low=1, high=128))
# Crop the paraphrase
paraphrase_cropped = tokenizer.encode_plus(sequence_0,
sequence_1,
max_length=paraphrase_len,
padding='max_length',
truncation=True,
return_tensors="pt")
# Determine bucket and pad the paraphrase
paraphrase_padded, target_bucket = get_bucket_and_pad_paraphrase(paraphrase_cropped)
# Use the corresponding bucket to run inference
output = buckets[target_bucket](*paraphrase_padded)
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.rst.txt
.. _torch-hf-bert-finetune:
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
=================================================================================================
In this tutorial, we show how to run a Hugging Face script that uses Hugging Face Trainer API
to do fine-tuning on Trainium. The example follows the `text-classification
example <https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification>`__
which fine-tunes BERT-base model for sequence classification on the GLUE
benchmark.
.. contents:: Table of Contents
:local:
:depth: 2
.. include:: ../note-performance.txt
Setup and compilation
---------------------
Before running the tutorial please follow the installation instructions at:
:ref:`Install PyTorch Neuron on
Trn1 <setup-torch-neuronx>`
Please set the storage of the instance to *512GB* or more if you also want to run through the BERT pretraining and GPT pretraining tutorials.
For all the commands below, make sure you are in the virtual environment that you created during setup before you run them:
.. code:: shell
source ~/aws_neuron_venv_pytorch/bin/activate
First, we install a recent version of the HF transformers, scikit-learn, and evaluate packages in our environment, and download the source matching the installed version. In this example, we use the text classification example from the HF transformers source:
.. code:: bash
export HF_VER=4.27.4
pip install -U transformers==$HF_VER datasets evaluate scikit-learn
cd ~/
git clone https://github.com/huggingface/transformers --branch v$HF_VER
cd ~/transformers/examples/pytorch/text-classification
Single-worker training
----------------------
We will run MRPC task fine-tuning following the example in README.md located in the path ``~/transformers/examples/pytorch/text-classification``. In this part of the tutorial we will use the Hugging Face model hub's pretrained ``bert-large-uncased`` model.
.. note::
If you are using older versions of transformers <4.27.0 or PyTorch Neuron <1.13.0, please see section :ref:`workarounds_for_older_versions` for necessary workarounds.
We use full BF16 casting via XLA_USE_BF16=1 and the compiler flag ``--model-type=transformer`` to achieve the best performance.
First, paste the following script into your terminal to create a “run.sh” file and change it to executable:
.. code:: bash
tee run.sh > /dev/null <<EOF
#!/usr/bin/env bash
export TASK_NAME=mrpc
export NEURON_CC_FLAGS="--model-type=transformer"
XLA_USE_BF16=1 python3 ./run_glue.py \\
--model_name_or_path bert-large-uncased \\
--task_name \$TASK_NAME \\
--do_train \\
--do_eval \\
--max_seq_length 128 \\
--per_device_train_batch_size 8 \\
--learning_rate 2e-5 \\
--num_train_epochs 5 \\
--overwrite_output_dir \\
--output_dir /tmp/\$TASK_NAME/ |& tee log_run
EOF
chmod +x run.sh
We optionally precompile the model and training script using neuron_parallel_compile to warm up the persistent
graph cache (Neuron Cache) such that the actual run has fewer compilations (faster run
time):
.. code:: bash
neuron_parallel_compile ./run.sh
Please ignore the results from this precompile run as it is only for
extracting and compiling the XLA graphs.
.. note::
With both train and evaluation options (``--do_train`` and ``--do_eval``), you will encounter a harmless error
``ValueError: Target is multiclass but average='binary'`` when using neuron_parallel_compile.
Precompilation is optional and only needs to be done once, unless hyperparameters such as batch size are modified.
After the optional precompilation, the actual run will be faster with minimal
additional compilations.
.. code:: bash
./run.sh
If precompilation was not done, the first execution of ./run.sh will be slower due to serial compilations. Rerunning the same script a second time will be quicker, as the compiled graphs will already be cached in the persistent cache.
.. _multi_worker_training:
Multi-worker training
---------------------
The above script runs one worker on one NeuronCore. To run on
multiple cores, launch the ``run_glue.py`` script with ``torchrun`` using the ``--nproc_per_node=N`` option to specify the number of workers
(N=2 for trn1.2xlarge, and N=2, 8, or 32 for trn1.32xlarge).
.. note::
If you are using older versions of transformers <4.27.0 or PyTorch Neuron <1.13.0, please see section :ref:`workarounds_for_older_versions` for necessary workarounds.
The following example runs 2 workers.
Paste the following script into your terminal to create a “run_2w.sh” file and change it to executable:
.. code:: bash
tee run_2w.sh > /dev/null <<EOF
#!/usr/bin/env bash
export TASK_NAME=mrpc
export NEURON_CC_FLAGS="--model-type=transformer"
XLA_USE_BF16=1 torchrun --nproc_per_node=2 ./run_glue.py \\
--model_name_or_path bert-large-uncased \\
--task_name \$TASK_NAME \\
--do_train \\
--do_eval \\
--max_seq_length 128 \\
--per_device_train_batch_size 8 \\
--learning_rate 2e-5 \\
--num_train_epochs 5 \\
--overwrite_output_dir \\
--output_dir /tmp/\$TASK_NAME/ |& tee log_run_2w
EOF
chmod +x run_2w.sh
Again, we optionally precompile the model and training script using neuron_parallel_compile to warm up the persistent
graph cache (Neuron Cache), ignoring the results from this precompile run as it is only for
extracting and compiling the XLA graphs:
.. code:: bash
neuron_parallel_compile ./run_2w.sh
Precompilation is optional and only needs to be done once, unless hyperparameters such as batch size are modified.
After the optional precompilation, the actual run will be faster with minimal
additional compilations.
.. code:: bash
./run_2w.sh
During the run, you will notice that the "Total train batch size" is now 16 and the "Total optimization steps" is now half of the single-worker value.
Converting BERT pretrained checkpoint to Hugging Face pretrained model format
-----------------------------------------------------------------------------
If you have a pretrained checkpoint (i.e., from the BERT phase 2 pretraining tutorial), you can run the script below (saved as "convert.py") to convert BERT pretrained saved checkpoint to Hugging Face pretrained model format. An example phase 2 pretrained checkpoint can be downloaded from ``s3://neuron-s3/training_checkpoints/pytorch/dp_bert_large_hf_pretrain/ckpt_29688.pt``. Note that here we also use the ``bert-large-uncased`` model configuration to match the BERT-Large model trained following BERT phase 2 pretraining tutorial.
.. code:: python
import sys
import argparse
import torch
from transformers import BertForPreTraining
import torch_xla.core.xla_model as xm

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--model_name', type=str, default='bert-large-uncased', help="Path to model identifier from huggingface.co/models")
    parser.add_argument('--output_saved_model_path', type=str, default='./hf_saved_model', help="Directory to save the HF pretrained model format.")
    parser.add_argument('--checkpoint_path', type=str, required=True, help="Path to pretrained checkpoint which needs to be converted to a HF pretrained model format")
    args = parser.parse_args(sys.argv[1:])

    # Instantiate the model with the matching configuration, then overwrite
    # its weights with those from the pretrained checkpoint.
    model = BertForPreTraining.from_pretrained(args.model_name)
    check_point = torch.load(args.checkpoint_path, map_location='cpu')
    model.load_state_dict(check_point['model'], strict=False)
    # Save in the Hugging Face pretrained model format, using the XLA-aware
    # save function.
    model.save_pretrained(args.output_saved_model_path, save_config=True, save_function=xm.save)
    print("Done converting checkpoint {} to HuggingFace saved model in directory {}.".format(args.checkpoint_path, args.output_saved_model_path))
Run the conversion script as:
.. code:: bash
python convert.py --checkpoint_path ckpt_29688.pt
After conversion, the new Hugging Face pretrained model is stored in the output directory specified by the ``--output_saved_model_path`` option which is ``hf_saved_model`` by default. You will use this directory in the next step.
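Optionally, you can sanity-check the converted model by loading it back with the Hugging Face API. The following is a minimal sketch, assuming the default ``hf_saved_model`` output directory:
.. code:: python
from transformers import BertForPreTraining

# Load the converted model back from the directory written by convert.py.
model = BertForPreTraining.from_pretrained('hf_saved_model')
print(model.config.model_type)  # expected to print: bert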
Paste the following script into your terminal to create a “run_converted.sh” file and change it to executable (note that it uses the converted Hugging Face pretrained model in the ``hf_saved_model`` directory):
.. code:: bash
tee run_converted.sh > /dev/null <<EOF
#!/usr/bin/env bash
export TASK_NAME=mrpc
export NEURON_CC_FLAGS="--model-type=transformer"
XLA_USE_BF16=1 python3 ./run_glue.py \\
--model_name_or_path hf_saved_model \\
--tokenizer_name bert-large-uncased \\
--task_name \$TASK_NAME \\
--do_train \\
--do_eval \\
--max_seq_length 128 \\
--per_device_train_batch_size 8 \\
--learning_rate 2e-5 \\
--num_train_epochs 5 \\
--overwrite_output_dir \\
--output_dir /tmp/\$TASK_NAME/ |& tee log_run_converted
EOF
chmod +x run_converted.sh
If it is the first time running with the ``bert-large-uncased`` model or if hyperparameters have changed, then the optional one-time precompilation step can save compilation time:
.. code:: bash
neuron_parallel_compile ./run_converted.sh
If you have run the single-worker training in a previous section, then you can skip the precompilation step and just do:
.. code:: bash
./run_converted.sh
.. _workarounds_for_older_versions:
Older versions of transformers <4.27.0 or PyTorch Neuron <1.13.0
----------------------------------------------------------------
If using older versions of the transformers package before 4.27.0 or PyTorch Neuron before 1.13.0, please edit the Python script run_glue.py and add the following lines after the Python
imports. They enable data parallel training using torchrun, disable DDP model wrapping, and work around a NaN issue seen with newer transformers versions:
.. code:: python
# Enable torchrun
import os
import torch
import torch_xla.distributed.xla_backend
from packaging import version
from transformers import __version__, Trainer

if version.parse(__version__) < version.parse("4.26.0") and os.environ.get("WORLD_SIZE"):
    torch.distributed.init_process_group('xla')

# Disable DDP for torchrun
import contextlib

if version.parse(__version__) < version.parse("4.20.0"):
    def _wrap_model(self, model, training=True):
        model.no_sync = lambda: contextlib.nullcontext()
        return model
else:
    def _wrap_model(self, model, training=True, dataloader=None):
        model.no_sync = lambda: contextlib.nullcontext()
        return model

Trainer._wrap_model = _wrap_model

# Workaround for NaNs seen with transformers version >= 4.21.0
# https://github.com/aws-neuron/aws-neuron-sdk/issues/593
import transformers

if os.environ.get("XLA_USE_BF16") or os.environ.get("XLA_DOWNCAST_BF16"):
    transformers.modeling_utils.get_parameter_dtype = lambda x: torch.bfloat16
.. _known_issues:
Known issues and limitations
----------------------------
The following are currently known issues:
- Long compilation times: these can be alleviated with the
``neuron_parallel_compile`` tool, which extracts graphs from a short trial run and
compiles them in parallel ahead of the actual run, as shown above.
- When precompiling with a batch size of 16 on trn1.2xlarge, you will see ``ERROR ||PARALLEL_COMPILE||: parallel compilation with neuronx-cc exited with error.Received error code: -9``. To work around this error, please set ``NEURON_PARALLEL_COMPILE_MAX_RETRIES=1`` in the environment.
- With release 2.6 and transformers==4.25.1,
using the ``neuron_parallel_compile`` tool to run the ``run_glue.py`` script
with both train and evaluation options (``--do_train`` and ``--do_eval``), you will encounter the harmless error
``ValueError: Target is multiclass but average='binary'``.
- Reduced accuracy for RoBERTa-Large is seen with Neuron PyTorch 1.12 (release 2.6) in FP32 mode with compiler BF16 autocast.
The workaround is to set ``NEURON_CC_FLAGS="--auto-cast none"`` or to set ``NEURON_RT_STOCHASTIC_ROUNDING_EN=1``.
- When using DDP in PT 1.13, compilation of one graph will fail with a "Killed" error message for ``bert-large-uncased``. For ``bert-base-cased``, the final MRPC evaluation accuracy is 31%, which is lower than expected. These issues are being investigated and will be fixed in an upcoming release. For now, DDP is disabled with the workaround shown above in :ref:`multi_worker_training`.
- When using DDP in PT 1.13 with neuron_parallel_compile precompilation, you will hit the error ``Rank 1 has 393 params, while rank 0 has inconsistent 0 params.``. To work around this error, add the following code snippet at the top of ``run_glue.py`` to skip the problematic shape verification code during precompilation:
.. code:: python
import os

if os.environ.get("NEURON_EXTRACT_GRAPHS_ONLY", None):
    import torch.distributed as dist
    _verify_param_shape_across_processes = lambda process_group, tensors, logger=None: True
- Variable input sizes: When fine-tuning models such as dslim/bert-base-NER using the `token-classification example <https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification>`__, you may encounter timeouts (lots of "socket.h:524 CCOM WARN Timeout waiting for RX" messages) and execution hangs. This occurs because the NER dataset has varying sample sizes, which causes many recompilations and compiled graph (NEFF) reloads. Furthermore, different data parallel workers can execute different compiled graphs. This multiple-program multiple-data behavior is currently unsupported. To work around this issue, please pad to maximum length using the Trainer API option ``--pad_to_max_length``, as sketched after this list.
- When running HuggingFace GPT fine-tuning with transformers version >= 4.21.0 and using XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1, you might see NaNs in the loss immediately at the first step. This issue occurs due to large negative constants used to implement attention masking (https://github.com/huggingface/transformers/pull/17306). To work around this issue, please use transformers version <= 4.20.0.
- When using the Trainer API option ``--bf16``, you will see ``RuntimeError: No CUDA GPUs are available``. To work around this error, please add ``import torch; torch.cuda.is_bf16_supported = lambda: True`` to the Python script (i.e., run_glue.py). (The Trainer API option ``--fp16`` is not yet supported.)
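For reference, the fixed-shape tokenization enabled by ``--pad_to_max_length`` behaves roughly like the sketch below (illustrative only; it mirrors the tutorial's ``bert-large-uncased`` model and maximum sequence length of 128, not the exact Trainer internals):
.. code:: python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('bert-large-uncased')
# Padding every sample to the same fixed length gives each batch an
# identical shape, which avoids recompilations for new input shapes.
batch = tokenizer(
    ['Neuron compiles one graph per input shape.'],
    padding='max_length',
    max_length=128,
    truncation=True,
    return_tensors='pt',
)
print(batch['input_ids'].shape)  # always torch.Size([1, 128])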
The following are resolved issues:
- When using the ``neuron_parallel_compile`` tool to run the ``run_glue.py`` script
with both train and evaluation options (``--do_train`` and ``--do_eval``), you will
encounter an INVALID_ARGUMENT error. To avoid this, only enable training for parallel
compile (``--do_train``); this will cause compilations during the evaluation step.
The INVALID_ARGUMENT error is fixed in release 2.6 together with transformers version 4.25.1.
- When running HuggingFace BERT (any size) fine-tuning tutorial or pretraining tutorial with transformers version >= 4.21.0 and < 4.25.1 and using XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1, you will see NaNs in the loss immediately at the first step. More details on the issue can be found at `pytorch/xla#4152 <https://github.com/pytorch/xla/issues/4152>`_. The workaround is to use transformers version < 4.21.0 or >= 4.25.1, or add ``transformers.modeling_utils.get_parameter_dtype = lambda x: torch.bfloat16`` to your Python script (i.e. run_glue.py).
- Some recompilation is seen at the epoch boundary even after ``neuron_parallel_compile`` is used. This can be fixed by using the same number of epochs both during precompilation and the actual run.
- When running multi-worker training, you may see the process getting killed at model-saving time on trn1.2xlarge.
This happens because the transformers ``trainer.save_model`` API uses ``xm.save`` for saving models.
This API is known to cause high host memory usage in multi-worker settings (see `Saving and Loading XLA Tensors <https://github.com/pytorch/xla/blob/master/API_GUIDE.md>`__). Coupled with a compilation
happening at the same time, this results in a host OOM. To avoid this issue, precompile all the graphs in multi-worker
training by first running the multi-worker training with ``neuron_parallel_compile <script>``,
followed by the actual training, as sketched below. This avoids the compilation at model save during the actual training.
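For example, using the two-worker script from this tutorial, the sequence would look like the following sketch:
.. code:: bash
# Precompile all graphs (results of this run are discarded).
neuron_parallel_compile ./run_2w.sh
# Actual training; saving the model no longer triggers a compilation.
./run_2w.sh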
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.rst.txt
```
.. _neuron-cc-ops-pytorch:
PyTorch Neuron (``torch-neuron``) Supported operators
=====================================================
The current operator list may be generated with these commands inside
Python:
.. code:: python
import torch.neuron
print(*torch.neuron.get_supported_operations(), sep='\n')
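For example, to check whether a specific operator is supported by the installed version, the generated list can be queried directly (a small sketch using the API above):
.. code:: python
import torch.neuron

supported = set(torch.neuron.get_supported_operations())
print('aten::linear' in supported)  # True on releases since 2.0.318.0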
.. _pytorch-neuron-release-2130:
PyTorch Neuron release [package version 1.*.*.2.9.1.0, SDK 2.13.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 08/28/2023
Added support for new operators:
- ``aten::clamp_min``
- ``aten::clamp_max``
.. _pytorch-neuron-release-2900:
PyTorch Neuron release [2.9.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 03/28/2023
Added support for new operators:
- ``aten::tensordot``
- ``aten::adaptive_avg_pool1d``
- ``aten::prelu``
- ``aten::reflection_pad2d``
- ``aten::baddbmm``
- ``aten::repeat``
.. _pytorch-neuron-release-2500:
PyTorch Neuron release [2.5.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 11/23/2022
Added support for new operators:
- ``aten::threshold``
- ``aten::roll``
- ``aten::instance_norm``
- ``aten::amin``
- ``aten::amax``
- ``aten::new_empty``
- ``aten::new_ones``
- ``aten::tril``
- ``aten::triu``
- ``aten::zero_``
- ``aten::all``
- ``aten::broadcast_tensors``
- ``aten::broadcast_to``
- ``aten::logical_and``
- ``aten::logical_not``
- ``aten::logical_or``
- ``aten::logical_xor``
- ``aten::_convolution_mode``
Added **limited** support for new operators:
- LSTM Operations. See: :ref:`torch_neuron_lstm_support`
- ``aten::lstm``
- ``aten::_pack_padded_sequence``
- ``aten::_pad_packed_sequence``
- ``aten::norm``: Supported when ``p`` argument is one of (``1``, ``2``, ``inf``, ``-inf``, ``'fro'``)
.. _pytorch-neuron-release-2200:
PyTorch Neuron release [2.2.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 03/25/2022
Added support for new operators:
- ``aten::max_pool2d_with_indices``: Fully supported (Was previously supported only when indices were unused).
.. _pytorch-neuron-release-2170:
PyTorch Neuron release [2.1.7.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 01/20/2022
Added support for new operators:
* ``aten::bucketize``
* ``aten::any``
* ``aten::remainder``
* ``aten::clip``
* ``aten::repeat_interleave``
* ``aten::tensor_split``
* ``aten::split_with_sizes``
* ``aten::isnan``
* ``aten::embedding_renorm_``
* ``aten::dot``
* ``aten::mv``
* ``aten::hardsigmoid``
* ``aten::hardswish``
* ``aten::trunc``
* ``aten::one_hot``: Supported when ``num_classes`` is known at trace time.
The dynamic version of this operation when ``num_classes = -1`` is not supported.
* ``aten::adaptive_max_pool1d``
* ``aten::adaptive_max_pool2d``
.. _pytorch-neuron-release-205360:
PyTorch Neuron Release [2.0.536.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- The following are operators with limited support on Neuron. Unlike fully
supported operators, these operators are not returned when using
:func:`torch_neuron.get_supported_operations`. See each operator
description for conditional support:
- ``aten::max_pool2d_with_indices`` - Supported when indices outputs are not used by a downstream operation. This allows the operation to be compiled to Neuron when it is equivalent to an ``aten::max_pool2d``.
- ``aten::max_pool3d_with_indices`` - Supported when indices outputs are not used by a downstream operation. This allows the operation to be compiled to Neuron when it is equivalent to an ``aten::max_pool3d``.
- ``aten::where`` - Supported when used as a conditional selection (3-argument variant). Unsupported when used to generate a dynamic list of indices (1-argument variant). See :func:`torch.where`.
.. _pytorch-neuron-release-203180:
PyTorch Neuron Release [2.0.318.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added support for new operators:
- ``aten::empty_like``
- ``aten::log``
- ``aten::type_as``
- ``aten::movedim``
- ``aten::einsum``
- ``aten::argmax``
- ``aten::min``
- ``aten::argmin``
- ``aten::abs``
- ``aten::cos``
- ``aten::sin``
- ``aten::linear``
- ``aten::pixel_shuffle``
- ``aten::group_norm``
- ``aten::_weight_norm``
.. _pytorch-neuron-release-15210:
PyTorch Neuron Release [1.5.21.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No change
.. _pytorch-neuron-release-1570:
PyTorch Neuron Release [1.5.7.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added support for new operators:
- ``aten::erf``
- ``prim::DictConstruct``
.. _pytorch-neuron-release-1410:
PyTorch Neuron Release [1.4.1.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No change
.. _pytorch-neuron-release-1350:
PyTorch Neuron Release [1.3.5.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added support for new operators:
- ``aten::numel``
- ``aten::ones_like``
- ``aten::reciprocal``
- ``aten::topk``
.. _pytorch-neuron-release-12160:
PyTorch Neuron Release [1.2.16.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No change
.. _pytorch-neuron-release-12150:
PyTorch Neuron Release [1.2.15.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No change
.. _pytorch-neuron-release-1230:
PyTorch Neuron Release [1.2.3.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added support for new operators:
- ``aten::silu``
- ``aten::zeros_like``
.. _pytorch-neuron-release-1170:
PyTorch Neuron Release [1.1.7.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added support for new operators:
- ``aten::_shape_as_tensor``
- ``aten::chunk``
- ``aten::empty``
- ``aten::masked_fill``
.. _pytorch-neuron-release-10240450:
PyTorch Neuron Release [1.0.24045.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added support for new operators:
- ``aten::__and__``
- ``aten::bmm``
- ``aten::clone``
- ``aten::expand_as``
- ``aten::fill_``
- ``aten::floor_divide``
- ``aten::full``
- ``aten::hardtanh``
- ``aten::hardtanh_``
- ``aten::le``
- ``aten::leaky_relu``
- ``aten::lt``
- ``aten::mean``
- ``aten::ne``
- ``aten::softplus``
- ``aten::unbind``
- ``aten::upsample_bilinear2d``
.. _pytorch-neuron-release-10172000:
PyTorch Neuron Release [1.0.1720.00]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added support for new operators:
- ``aten::constant_pad_nd``
- ``aten::meshgrid``
.. _pytorch-neuron-release-1015320:
PyTorch Neuron Release [1.0.1532.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added support for new operators:
- ``aten::ones``
.. _pytorch-neuron-release-1015220:
PyTorch Neuron Release [1.0.1522.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No change
.. _pytorch-neuron-release-1013860:
PyTorch Neuron Release [1.0.1386.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added support for new operators:
- ``aten::ceil``
- ``aten::clamp``
- ``aten::eq``
- ``aten::exp``
- ``aten::expand_as``
- ``aten::flip``
- ``aten::full_like``
- ``aten::ge``
- ``aten::gt``
- ``aten::log2``
- ``aten::log_softmax``
- ``aten::max``
- ``aten::neg``
- ``aten::relu``
- ``aten::rsqrt``
- ``aten::scalarImplicit``
- ``aten::sqrt``
- ``aten::squeeze``
- ``aten::stack``
- ``aten::sub``
- ``aten::sum``
- ``aten::true_divide``
- ``aten::upsample_nearest2d``
- ``prim::Constant``
- ``prim::GetAttr``
- ``prim::ImplicitTensorToNum``
- ``prim::ListConstruct``
- ``prim::ListUnpack``
- ``prim::NumToTensor``
- ``prim::TupleConstruct``
- ``prim::TupleUnpack``
Please note that primitives are included in this list starting from this release.
.. _pytorch-neuron-release-1011680:
PyTorch Neuron Release [1.0.1168.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added support for new operators:
- ``aten::ScalarImplicit``
.. _pytorch-neuron-release-1010010:
PyTorch Neuron Release [1.0.1001.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added support for new operators:
- ``aten::detach``
- ``aten::floor``
- ``aten::gelu``
- ``aten::pow``
- ``aten::sigmoid``
- ``aten::split``
Removed support for operators:
- ``aten::embedding``: Does not meet **performance** criteria
- ``aten::erf``: Error function does not meet **accuracy** criteria
- ``aten::tf_dtype_from_torch``: Internal support function, not an operator
.. _pytorch-neuron-release-108250:
PyTorch Neuron Release [1.0.825.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No change
.. _pytorch-neuron-release-107630:
PyTorch Neuron Release [1.0.763.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added support for new operators:
- ``aten::Int``
- ``aten::arange``
- ``aten::contiguous``
- ``aten::div``
- ``aten::embedding``
- ``aten::erf``
- ``aten::expand``
- ``aten::eye``
- ``aten::index_select``
- ``aten::layer_norm``
- ``aten::matmul``
- ``aten::mm``
- ``aten::permute``
- ``aten::reshape``
- ``aten::rsub``
- ``aten::select``
- ``aten::size``
- ``aten::slice``
- ``aten::softmax``
- ``aten::tf_dtype_from_torch``
- ``aten::to``
- ``aten::transpose``
- ``aten::unsqueeze``
- ``aten::view``
- ``aten::zeros``
Removed support for operators:
- ``aten::tf_broadcastable_slice``: Internal support function, not an operator
- ``aten::tf_padding``: Internal support function, not an operator
These operators were already supported previously:
- ``aten::_convolution``
- ``aten::adaptive_avg_pool2d``
- ``aten::add``
- ``aten::add_``
- ``aten::addmm``
- ``aten::avg_pool2d``
- ``aten::batch_norm``
- ``aten::cat``
- ``aten::dimension_value``
- ``aten::dropout``
- ``aten::flatten``
- ``aten::max_pool2d``
- ``aten::mul``
- ``aten::relu_``
- ``aten::t``
- ``aten::tanh``
- ``aten::values``
- ``prim::Constant``
- ``prim::GetAttr``
- ``prim::ListConstruct``
- ``prim::ListUnpack``
- ``prim::TupleConstruct``
- ``prim::TupleUnpack``
.. _pytorch-neuron-release-106720:
PyTorch Neuron Release [1.0.672.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No change
.. _pytorch-neuron-release-105520:
PyTorch Neuron Release [1.0.552.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added support for new operators:
- ``aten::_convolution``
- ``aten::adaptive_avg_pool2d``
- ``aten::add``
- ``aten::add_``
- ``aten::addmm``
- ``aten::avg_pool2d``
- ``aten::batch_norm``
- ``aten::cat``
- ``aten::dimension_value``
- ``aten::dropout``
- ``aten::flatten``
- ``aten::max_pool2d``
- ``aten::mul``
- ``aten::relu_``
- ``aten::t``
- ``aten::tanh``
- ``aten::tf_broadcastable_slice``
- ``aten::tf_padding``
- ``aten::values``
- ``prim::Constant``
- ``prim::GetAttr``
- ``prim::ListConstruct``
- ``prim::ListUnpack``
- ``prim::TupleConstruct``
- ``prim::TupleUnpack``
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuron/misc-inference-torch-neuron.rst.txt
```
Misc (``torch-neuron``)
=======================
.. toctree::
:maxdepth: 1
:hidden:
/release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch
/frameworks/torch/torch-neuron/troubleshooting-guide
/release-notes/torch/torch-neuron/torch-neuron
.. include:: /frameworks/torch/torch-neuron/misc-inference-torch-neuron.txt
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.rst.txt
```
.. _torch_neuron_core_placement_guide:
PyTorch Neuron (``torch-neuron``) Core Placement
================================================
This programming guide describes the available techniques and APIs to be able
to allocate NeuronCores to a process and place models onto specific NeuronCores.
In order of precedence, the current recommendation is to use the following
placement techniques:
1. For most regular models, default core placement should be used in
conjunction with ``NEURON_RT_NUM_CORES`` (:ref:`torch_placement_default`)
2. For more specific core placement of NeuronCore Pipelined models,
``NEURONCORE_GROUP_SIZES`` should be used (:ref:`torch_placement_ncg`).
3. Finally, for even more granular control, the experimental
explicit placement APIs may be used (:ref:`torch_placement_explicit`).
.. contents:: Table of Contents
:depth: 3
The following guide will assume a machine with 8 NeuronCores:
- NeuronCores will use the notation ``nc0``, ``nc1``, etc.
- NeuronCore Groups will use the notation ``ncg0``, ``ncg1`` etc.
- Models will use the notation ``m0``, ``m1`` etc.
NeuronCores, NeuronCore Groups, and model allocations will be displayed in
the following format:
.. raw:: html
:file: images/0-0-legend.svg
Note that the actual cores that are visible to the process can be adjusted
according to the :ref:`nrt-configuration`.
NeuronCore Pipeline
-------------------
A key concept to understand the intent behind certain core placement strategies
is NeuronCore Pipelining (See :ref:`neuroncore-pipeline`). NeuronCore Pipelining
allows a model to be automatically split into pieces and executed on different
NeuronCores.
For most models only 1 NeuronCore will be required for execution. A model will
**only** require more than one NeuronCore when using NeuronCore Pipeline.
When model pipelining is enabled, the model is split between multiple
NeuronCores and data is transferred between them. For example, if the compiler
flag ``--neuroncore-pipeline-cores 4`` is used, this splits the model into
4 pieces to be executed on 4 separate NeuronCores.
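For illustration, the flag can be passed through the ``compiler_args`` argument at trace time. The following is a minimal sketch; the toy model and output file name are placeholders, not part of this guide:
.. code-block:: python
import torch
import torch_neuron

# A toy model stands in for a real network.
model = torch.nn.Sequential(torch.nn.Linear(128, 128), torch.nn.ReLU())
example = torch.rand(1, 128)

# Split the compiled model across 4 NeuronCores (NeuronCore Pipeline).
pipelined = torch_neuron.trace(
    model,
    example,
    compiler_args=['--neuroncore-pipeline-cores', '4'],
)
pipelined.save('model-with-4-neuron-pipeline-cores.pt')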
.. _torch_placement_default:
Default Core Allocation & Placement
-----------------------------------
The most basic requirement of an inference application is to be able to place a
single model on a single NeuronCore. More complex applications may use multiple
NeuronCores or even multiple processes each executing different models. The
important thing to note about designing an inference application is that a
single NeuronCore will always be allocated to a single process. *Processes do
not share NeuronCores*. Different configurations can be used to ensure that
an application process has enough NeuronCores allocated to execute its model(s):
- Default: A process will attempt to take ownership of **all NeuronCores**
visible on the instance. This should be used when an instance is only running
a single inference process since no other process will be allowed to take
ownership of any NeuronCores.
- ``NEURON_RT_NUM_CORES``: Specify the **number of NeuronCores** to allocate
to the process. This places no restrictions on which NeuronCores will be used,
however, the resulting NeuronCores will always be contiguous. This should be
used in multi-process applications where each process should only use a subset
of NeuronCores.
- ``NEURON_RT_VISIBLE_CORES``: Specifies exactly **which NeuronCores** are
allocated to the process by index. Similar to ``NEURON_RT_NUM_CORES``, this
can be used in multi-process applications where each process should only use a
subset of NeuronCores. This provides more fine-grained control over the
exact NeuronCores that are allocated to a given process.
- ``NEURONCORE_GROUP_SIZES``: Specifies a number of **NeuronCore Groups** which
are allocated to the process. This is described in more detail in the
:ref:`torch_placement_ncg` section.
See the :ref:`nrt-configuration` for more environment variable details.
Example: Default
^^^^^^^^^^^^^^^^
**Python Script**:
.. code-block:: python
import torch
import torch_neuron
m0 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc0
m1 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc1
.. raw:: html
:file: images/0-1-default-2.svg
With no environment configuration, the process will take ownership of all
NeuronCores. In this example, only two of the NeuronCores are used by the
process and the remaining are allocated but left idle.
Example: ``NEURON_RT_NUM_CORES``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Environment Setup**:
.. code-block:: bash
export NEURON_RT_NUM_CORES='2'
**Python Script**:
.. code-block:: python
import torch
import torch_neuron
m0 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc0
m1 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc1
.. raw:: html
:file: images/0-2-default-rt-num-cores.svg
Since there is no other process on the instance, only the first 2 NeuronCores
will be acquired by the process. Models load in a simple linear order to the
least used NeuronCores.
Example: ``NEURON_RT_VISIBLE_CORES``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Environment Setup**:
.. code-block:: bash
export NEURON_RT_VISIBLE_CORES='4-5'
**Python Script**:
.. code-block:: python
import torch
import torch_neuron
m0 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc4
m1 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc5
.. raw:: html
:file: images/0-3-default-rt-visible-cores.svg
Unlike ``NEURON_RT_NUM_CORES``, setting the visible NeuronCores allows the
process to take control of a specific contiguous set. This allows an application
to have a more fine-grained control of where models will be placed.
Example: Overlapping Models
^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Environment Setup**:
.. code-block:: bash
export NEURON_RT_VISIBLE_CORES='0-1'
**Python Script**:
.. code-block:: python
import torch
import torch_neuron
m0 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc0
m1 = torch.jit.load('model-with-2-neuron-pipeline-cores.pt') # Loads to nc0-nc1
m2 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc1
.. raw:: html
:file: images/0-4-default-overlap-model-2.svg
.. raw:: html
:file: images/0-4-default-overlap.svg
This shows how models may share NeuronCores but the default model placement
will attempt to evenly distribute NeuronCore usage rather than overlapping all
models on a single NeuronCore.
Example: Multiple Processes
^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Environment Setup**:
.. code-block:: bash
export NEURON_RT_NUM_CORES='2'
**Python Script**:
.. code-block:: python
import torch
import torch_neuron
m0 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc0
m1 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc1
In this example, if the script is run **twice**, the following allocations
will be made:
.. raw:: html
:file: images/0-5-default-multiprocess.svg
Note that each process will take ownership of as many NeuronCores as
specified by the ``NEURON_RT_NUM_CORES`` configuration.
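For instance, both processes could be launched from a shell as follows (a sketch; ``infer.py`` stands in for the Python script above):
.. code-block:: bash
export NEURON_RT_NUM_CORES='2'
python infer.py &   # first process acquires nc0-nc1
python infer.py &   # second process acquires nc2-nc3
wait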
.. _torch_placement_ncg:
NEURONCORE_GROUP_SIZES
----------------------
.. important::
Explicit core placement should only be used when a specific
performance goal is required. By default ``torch-neuron`` places models on
the **least used** NeuronCores, which should be optimal for most
applications.
Additionally, ``NEURONCORE_GROUP_SIZES`` will be deprecated in a future
release and should be avoided in favor of newer placement methods.
Use ``NEURON_RT_NUM_CORES`` or ``NEURON_RT_VISIBLE_CORES`` with default
placement if possible (see :ref:`torch_placement_default`).
In the current release of NeuronSDK, the most well-supported method of placing
models onto specific NeuronCores is to use the ``NEURONCORE_GROUP_SIZES``
environment variable. This will define a set of "NeuronCore Groups" for the
application process.
NeuronCore Groups are *contiguous sets of NeuronCores* that are allocated to
a given process. Creating groups allows an application to ensure that a
model has a defined set of NeuronCores that will always be allocated to it.
Note that NeuronCore Groups *can* be used to allocate non-pipelined models
(those requiring exactly 1 NeuronCore) to specific NeuronCores but this is
not the primary intended use. The intended use of NeuronCore Groups is to
ensure pipelined models (those requiring >1 NeuronCore) have exclusive access
to a specific set of contiguous NeuronCores.
In the cases where models are being used *without* NeuronCore Pipeline, the
general recommendation is to use default placement
(See :ref:`torch_placement_default`).
The following section demonstrates how ``NEURONCORE_GROUP_SIZES`` can be used
and the issues that may arise.
Example: Single NeuronCore Group
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In the example where one model requires 4 NeuronCores, the correct environment
configuration would be:
**Environment Setup**:
.. code-block:: bash
export NEURONCORE_GROUP_SIZES='4'
**Python Script**:
.. code-block:: python
import torch
import torch_neuron
m0 = torch.jit.load('model-with-4-neuron-pipeline-cores.pt') # Loads to nc0-nc3
.. raw:: html
:file: images/1-ncg-4.svg
This is the most basic usage of a NeuronCore Group. The environment setup
causes the process to take control of 4 NeuronCores and then the script loads
a model compiled with a NeuronCore Pipeline size of 4 to the first group.
Example: Multiple NeuronCore Groups
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
With more complicated configurations, the intended use of
``NEURONCORE_GROUP_SIZES`` is to create 1 Group per model with the correct size
to ensure that the models are placed on the intended NeuronCores. Similarly, the
environment would need to be configured to create a NeuronCore Group for each
model:
**Environment Setup**:
.. code-block:: bash
export NEURONCORE_GROUP_SIZES='3,4,1'
**Python Script**:
.. code-block:: python
import torch
import torch_neuron
m0 = torch.jit.load('model-with-3-neuron-pipeline-cores.pt') # Loads to nc0-nc2
m1 = torch.jit.load('model-with-4-neuron-pipeline-cores.pt') # Loads to nc3-nc6
m2 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc7
.. raw:: html
:file: images/2-ncg-3-4-1.svg
Issue: Overlapping Models with Differing Model Sizes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When multiple models are loaded to a single NeuronCore Group, this can cause
unintended inefficiencies. A single model is only intended to span a single
NeuronCore Group. Applications with many models of varying sizes can be
restricted by NeuronCore Group configurations, since the optimal model
layout may require finer-grained control.
**Environment Setup**:
.. code-block:: bash
export NEURONCORE_GROUP_SIZES='2,2'
**Python Script**:
.. code-block:: python
import torch
import torch_neuron
m0 = torch.jit.load('model-with-2-neuron-pipeline-cores.pt') # Loads to nc0-nc1
m1 = torch.jit.load('model-with-2-neuron-pipeline-cores.pt') # Loads to nc2-nc3
m2 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc0
m3 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc2
m4 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc0
.. raw:: html
:file: images/3-models-m4-0-warning.svg
.. raw:: html
:file: images/3-models-m2-0-m3-2.svg
.. raw:: html
:file: images/3-ncg-2-2.svg
Here ``NEURONCORE_GROUP_SIZES`` does not produce an optimal layout
because placement strictly follows the layout of NeuronCore Groups. A
potentially better layout would place ``m4`` onto ``nc1``. In this case,
since a pipelined model will not have exclusive access to a set of
NeuronCores anyway, the default NeuronCore placement (no NeuronCore Groups
specified) would distribute the models more evenly.
Also note here that this is an example of where the order of model loads
affects which model is assigned to which NeuronCore Group. If the order of the
load statements is changed, models may be assigned to different NeuronCore
Groups.
Issue: Incompatible Model Sizes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Another problem occurs when attempting to place a model that does not fit
evenly into a single group:
**Environment Setup**:
.. code-block:: bash
export NEURONCORE_GROUP_SIZES='2,2'
**Python Script**:
.. code-block:: python
import torch
import torch_neuron
m0 = torch.jit.load('model-with-2-neuron-pipeline-cores.pt') # Loads to nc0-nc1
m1 = torch.jit.load('model-with-2-neuron-pipeline-cores.pt') # Loads to nc2-nc3
m2 = torch.jit.load('model-with-3-neuron-pipeline-cores.pt') # Loads to nc0-nc2
.. raw:: html
:file: images/4-models-m2-0-2-warning.svg
.. raw:: html
:file: images/3-ncg-2-2.svg
The model will be placed *across* NeuronCore Groups since there is no obvious
group to assign the model to according to the environment variable
configuration. Depending on the individual model and application requirements,
the placement here may not be optimal.
Issue: Multiple Model Copies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is common in inference serving applications to use multiple replicas of a
single model across different NeuronCores. This allows the hardware to be fully
utilized to maximize throughput. In this scenario, when using NeuronCore
Groups, the only way to replicate a model on multiple NeuronCores is to create a
*new model* object. In the example below, 4 model loads are performed to place
a model in each NeuronCore Group.
**Environment Setup**:
.. code-block:: bash
export NEURONCORE_GROUP_SIZES='2,2,2,2'
**Python Script**:
.. code-block:: python
import torch
import torch_neuron
models = list()
for _ in range(4):
model = torch.jit.load('model-with-2-neuron-pipeline-cores.pt')
models.append(model)
.. raw:: html
:file: images/3-ncg-2-2-2-2-copies.svg
The largest consequence of this type of model allocation is that the application
code is responsible for routing inference requests to models. There are a
variety of ways to implement this request switching, but in all cases the
routing logic must be implemented in the application code. One simple approach
is sketched below.
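As a rough illustration, a minimal round-robin dispatcher over the ``models`` list above might look like the following sketch (the ``infer`` helper is hypothetical and not part of the Neuron API):

.. code-block:: python

    import itertools

    # Cycle through the replicated model handles; each request is dispatched
    # to the next handle and therefore to that handle's NeuronCore Group.
    replicas = itertools.cycle(models)

    def infer(inputs):
        model = next(replicas)
        return model(inputs)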
Issue Summary
^^^^^^^^^^^^^
The use of ``NEURONCORE_GROUP_SIZES`` has the following problems:
- **Variable Sized Models**: Models which require crossing NeuronCore Group
boundaries may be placed poorly. This means the group configuration limits the
sizes of the models that can be loaded.
- **Model Load Order**: Models are loaded to NeuronCore Groups greedily. This
means that the order of model loads can potentially negatively affect
application performance by causing unintentional overlap.
- **Implicit Placement**: NeuronCore Groups cannot be explicitly chosen in the
application code.
- **Manual Replication**: Loading multiple copies of a model to different
NeuronCore Groups requires that multiple model handles are used.
.. _torch_placement_explicit:
Experimental: Explicit Core Placement
-------------------------------------
To address the limitations of ``NEURONCORE_GROUP_SIZES``, a new set of APIs has
been added which allows specific NeuronCores to be chosen by the application
code. These can be found in the :ref:`torch_neuron_core_placement_api` documentation.
Example: Manual Core Selection
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The most direct usage of the placement APIs is to manually select the
start NeuronCore that each model is loaded to. This will automatically use as
many NeuronCores as necessary for that model (1 for most models, >1 for
NeuronCore Pipeline models).
**Environment Setup**:
.. code-block:: bash
export NEURON_RT_NUM_CORES='4'
**Python Script**:
.. code-block:: python
import torch
import torch_neuron
# NOTE: Order of loads does NOT matter
with torch_neuron.experimental.neuron_cores_context(2):
m1 = torch.jit.load('model-with-2-neuron-pipeline-cores.pt') # Loads to nc2-nc3
with torch_neuron.experimental.neuron_cores_context(0):
m2 = torch.jit.load('model-with-3-neuron-pipeline-cores.pt') # Loads to nc0-nc2
with torch_neuron.experimental.neuron_cores_context(0):
m0 = torch.jit.load('model-with-2-neuron-pipeline-cores.pt') # Loads to nc0-nc1
with torch_neuron.experimental.neuron_cores_context(3):
m3 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads to nc3
.. raw:: html
:file: images/5-models-m2-0-2-m3-3.svg
.. raw:: html
:file: images/5-placement.svg
Note that this directly solves the ``NEURONCORE_GROUP_SIZES`` issues of:
- **Variable Sized Models**: Since models are directly placed on the
NeuronCores requested by the application, there is no disconnect
between the model sizes and NeuronCore Group sizes.
- **Model Load Order**: Since the NeuronCores are explicitly selected, models
can be placed deterministically regardless of the order in which they are
loaded.
- **Implicit Placement**: Similarly, explicit placement means there is no chance
that a model will end up being allocated to an incorrect NeuronCore Group.
Example: Automatic Multicore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using explicit core placement it is possible to replicate a model to multiple
NeuronCores simultaneously. This means that a single model object within Python
can utilize all available NeuronCores (or the NeuronCores allocated to the process).
**Environment Setup**:
.. code-block:: bash
export NEURON_RT_NUM_CORES='8'
**Python Script**:
.. code-block:: python
import torch
import torch_neuron
with torch_neuron.experimental.multicore_context():
m0 = torch.jit.load('model-with-1-neuron-pipeline-cores.pt') # Loads replications to nc0-nc7
.. raw:: html
:file: images/6-multicore.svg
This addresses the last ``NEURONCORE_GROUP_SIZES`` issue of:
- **Manual Replication**: Since models can be automatically replicated to
multiple NeuronCores, applications no longer need to implement routing logic
or perform multiple loads.
A secondary benefit of this API is that the exact same loading logic can be
used on an ``inf1.xlarge`` or an ``inf1.6xlarge``. In either case, it will use
all of the NeuronCores that are visible to the process, so no special logic
needs to be coded for different instance types.
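As a rough sketch of how an application might drive the replicated model, assuming the runtime distributes concurrent invocations across the replicas, a thread pool is one simple way to keep all replicated NeuronCores busy (``batches`` is a hypothetical list of input tensors):

.. code-block:: python

    from concurrent.futures import ThreadPoolExecutor

    # Drive the single replicated model object from multiple threads so that
    # the NeuronCore replicas can service requests concurrently.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(m0, batches))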
Example: Explicit Replication
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Replication is also possible with the
:func:`~torch_neuron.experimental.neuron_cores_context` API. The number of
replications is chosen by ``replications = floor(nc_count / cores_per_model)``.
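For the configuration used below (4 NeuronCores requested and a model compiled for 2 NeuronCores), the formula works out to 2 replicas:

.. code-block:: python

    import math

    # replications = floor(nc_count / cores_per_model)
    nc_count = 4         # NeuronCores requested via nc_count below
    cores_per_model = 2  # NeuronCore Pipeline size of the compiled model
    replications = math.floor(nc_count / cores_per_model)
    assert replications == 2  # one replica on nc2-nc3, one on nc4-nc5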
**Environment Setup**:
.. code-block:: bash
export NEURON_RT_NUM_CORES='8'
**Python Script**:
.. code-block:: python
import torch
import torch_neuron
with torch_neuron.experimental.neuron_cores_context(start_nc=2, nc_count=4):
m0 = torch.jit.load('model-with-2-neuron-pipeline-cores.pt') # Loads replications to nc2-nc5
.. raw:: html
:file: images/7-replication.svg
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.rst.txt
|
```
.. _torch-analyze-for-training-tutorial:
Analyze for Training Tutorial
==============================
This tutorial explains how to analyze a model for training support via ``torch-neuronx``.
.. note::
For analyzing models for inference support via ``torch-neuronx``, please refer to :ref:`torch_neuronx.analyze() <torch_neuronx_analyze_api>`
Setup
-----
For this tutorial we'll be using two scripts: ``supported.py`` and ``unsupported.py``. Create these files by copying and pasting the code below into the respective files.
``supported.py``
.. code:: ipython3
import torch
import torch_xla.core.xla_model as xm
class NN(torch.nn.Module):
def __init__(self):
super().__init__()
self.layer1 = torch.nn.Linear(4,4)
self.nl1 = torch.nn.ReLU()
self.layer2 = torch.nn.Linear(4,2)
self.nl2 = torch.nn.Tanh()
def forward(self, x):
x = self.nl1(self.layer1(x))
return self.nl2(self.layer2(x))
def main():
device = xm.xla_device()
model = NN().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.MSELoss()
inp = torch.rand(4)
target = torch.tensor([1,0])
model.train()
for epoch in range(2):
optimizer.zero_grad()
inp = inp.to(device)
target = target.to(device)
output = model(inp)
loss = loss_fn(output,target)
loss.backward()
optimizer.step()
xm.mark_step()
if __name__ == '__main__':
main()
``unsupported.py``
.. code:: ipython3
import torch
import torch_xla.core.xla_model as xm
class UnsupportedModel(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
y = torch.fft.fft(x)
x = x + 10
return x * y
def main():
device = xm.xla_device()
model = UnsupportedModel().to(device)
inp = torch.rand(4)
model.train()
for epoch in range(1):
inp = inp.to(device)
output = model(inp)
xm.mark_step()
if __name__ == '__main__':
main()
Running ``analyze`` via ``neuron_parallel_compile``
---------------------------------------------------
To analyze a model, we supply the training script to the ``analyze`` command, which is shipped with ``neuron_parallel_compile``.
The command is:
.. code:: shell
neuron_parallel_compile --command analyze python supported.py
This will generate a large amount of output showing compilation statuses.
Here's a snippet of the output from running the above command.
.. code:: shell
.2023-05-25 00:43:43.000394: 776642 INFO ||ANALYZE||: Compiling /tmp/model_analyis_graphs/compare_7841189860629745939_23.hlo.pb using following command: neuronx-cc compile --target=trn1 --framework XLA /tmp/model_analyis_graphs/compare_7841189860629745939_23.hlo.pb --verbose=35 --query-compute-placement
2023-05-25 00:43:43.000418: 776642 INFO ||ANALYZE||: Compiling /tmp/model_analyis_graphs/multiply_15640857564712679356_53.hlo.pb using following command: neuronx-cc compile --target=trn1 --framework XLA /tmp/model_analyis_graphs/multiply_15640857564712679356_53.hlo.pb --verbose=35 --query-compute-placement
.
Compiler status PASS
2023-05-25 00:43:43.000549: 776642 INFO ||ANALYZE||: Compiling /tmp/model_analyis_graphs/subtract_1927104012014828209_49.hlo.pb using following command: neuronx-cc compile --target=trn1 --framework XLA /tmp/model_analyis_graphs/subtract_1927104012014828209_49.hlo.pb --verbose=35 --query-compute-placement
...
Compiler status PASS
The analysis report will be generated as a JSON file.
The location of the report is shown as the last log entry:
.. code:: shell
2023-05-25 00:43:49.000252: 776642 INFO ||ANALYZE||: Removing existing report /home/ubuntu/analyze_for_training/model_analysis_result/result.json
2023-05-25 00:43:49.000252: 776642 INFO ||ANALYZE||: Model analysis completed. Report - /home/ubuntu/analyze_for_training/model_analysis_result/result.json
.. note::
Note that if a report is already present in the specified path, ``analyze`` will remove/overwrite it.
The report generated by running the above command looks like:
.. code:: json
{
"torch_neuronx_version": "1.13.0.1.6.1",
"neuronx_cc_version": "2.5.0.28+1be23f232",
"support_percentage": "100.00%",
"supported_operators": {
"aten": {
"aten::permute": 8,
"aten::add": 8,
"aten::mul": 8,
"aten::expand": 18,
"aten::mm": 10,
"aten::mse_loss_backward": 12,
"aten::relu": 3,
"aten::threshold_backward": 4,
"aten::squeeze": 4,
"aten::view": 4,
"aten::pow": 2,
"aten::mse_loss": 2,
"aten::tanh": 2
}
},
"unsupported_operators": {
"aten": []
}
}
.. note::
Note that the ``torch_neuronx`` and ``neuronx_cc`` versions may differ from this example.
Understanding ``analyze`` report for Unsupported Models
-------------------------------------------------------
Default Verbosity
~~~~~~~~~~~~~~~~~
Let's run ``analyze`` on ``unsupported.py``:
.. code:: shell
neuron_parallel_compile --command analyze python unsupported.py
Here is the report generated by the above command:
.. code:: json
{
"torch_neuronx_version": "1.13.0.1.6.1",
"neuronx_cc_version": "2.5.0.28+1be23f232",
"support_percentage": "60.00%",
"supported_operators": {
"aten": {
"aten::add": 2,
"aten::mul": 1
}
},
"unsupported_operators": {
"aten": [
{
"kind": "aten::mul",
"failureAt": "neuronx-cc",
"call": "test2_unsup.py 24"
}
]
}
}
The list of unsupported operators provides the specific aten op that failed and where that operator is called in the training script.
One thing to notice is that the ``support_percentage`` doesn't exactly add up. This is because the ``support_percentage`` is calculated based on the number of supported XLA/HLO instructions (explained more in the next section). To see the specific XLA/HLO op lowerings, use the flag ``--analyze-verbosity 1``, as the default is ``2``.
Finally, a specific aten operator can appear as both supported and unsupported. In our example, this can be seen with ``aten::mul``; whether it is supported depends on the configuration of the aten op. The section below describes what went wrong with the ``aten::mul`` op.
Lower Level Verbosity
~~~~~~~~~~~~~~~~~~~~~
Let's run again with a lower verbosity level:
.. code:: shell
neuron_parallel_compile --command analyze --analyze-verbosity 1 python unsupported.py
The report looks like:
.. code:: json
{
"torch_neuronx_version": "1.13.0.1.6.1",
"neuronx_cc_version": "2.5.0.28+1be23f232",
"support_percentage": "60.00%",
"supported_operators": {
"aten": {
"aten::mul": 1,
"aten::add": 2
},
"xla": [
"f32[] multiply(f32[], f32[])",
"f32[4]{0} broadcast(f32[]), dimensions={}",
"f32[4]{0} add(f32[4]{0}, f32[4]{0})"
]
},
"unsupported_operators": {
"aten": [
{
"kind": "aten::mul",
"failureAt": "neuronx-cc",
"call": "test2_unsup.py 24"
}
],
"xla": [
{
"hlo_instruction": "c64[4]{0} convert(f32[4]{0})",
"aten_op": "aten::mul"
},
{
"hlo_instruction": "c64[4]{0} multiply(c64[4]{0}, c64[4]{0})",
"aten_op": "aten::mul"
}
]
}
}
This report provides both the aten operator and the failed XLA/HLO instructions. There will be more HLO instructions than aten ops since an aten op generally lowers to multiple HLO instructions. As a result, the ``support_percentage`` field doesn't exactly line up with the aten operator count, but does line up with the XLA/HLO instruction count. This level of verbosity is intended for use when you have the ability to modify the model's HLO lowering, or generally have insight into the HLO lowering.
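To make the arithmetic concrete, the percentage can be recomputed from the verbose report above; this short sketch assumes the default report location shown earlier:

.. code:: python

    import json

    # 3 supported XLA/HLO instructions out of 5 total -> 60.00%
    with open('model_analysis_result/result.json') as f:
        report = json.load(f)

    supported = len(report['supported_operators']['xla'])      # 3
    unsupported = len(report['unsupported_operators']['xla'])  # 2
    print(f'{100 * supported / (supported + unsupported):.2f}%')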
As mentioned before, the ``aten::mul`` op appears to be both supported and unsupported. This is because the compiler does not support a specific configuration of ``aten::mul``, which can be seen more clearly with the HLO lowering. In the above example, the ``aten::mul`` operator is unsupported since at least one parameter provided was a complex type (``C64``), which is unsupported by ``neuronx-cc``.
This concludes the tutorial. The API for ``analyze`` can be found within :ref:`neuron_parallel_compile <pytorch-neuronx-parallel-compile-cli>`.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-customops/tutorials/customop-mlp-training.rst.txt
|
```
.. _neuronx-customop-mlp-tutorial:
Neuron Custom C++ Operators in MLP Training
===========================================
In this tutorial we’ll demonstrate how to prepare a PyTorch model that contains a custom operator (i.e. a CppExtension) for Neuron compilation to run on Trainium EC2 instances. To learn more about Neuron CustomOps, see :ref:`neuron_c++customops`. For a deeper dive on MNIST or Multi-Layer Perceptron models, see the :ref:`neuronx-mlp-training-tutorial`. This tutorial assumes the reader is familiar with `PyTorch Custom Extensions <https://pytorch.org/tutorials/advanced/cpp_extension.html>`_.
.. contents:: Table of Contents
:local:
:depth: 2
Setup Environment and Download Examples
---------------------------------------
Before running the tutorial please follow the installation instructions at:
* :ref:`pytorch-neuronx-install` on Trn1
.. note::
The name of ``aws-neuronx-gpsimd-customop`` has been changed to ``aws-neuronx-gpsimd-customop-lib`` as of the Neuron 2.10 release.
.. note::
Custom C++ Operators are supported as of Neuron SDK version 2.7 as a beta feature. As such, this feature is not installed by default. Additional tooling and library packages (RPM and DEB) are required. On AL2, they can be installed with the following commands:
::
sudo yum remove python3-devel -y
sudo yum remove aws-neuronx-gpsimd-tools-0.* -y
sudo yum remove aws-neuronx-gpsimd-customop-lib-0.* -y
sudo yum install python3-devel -y
sudo yum install aws-neuronx-gpsimd-tools-0.* -y
sudo yum install aws-neuronx-gpsimd-customop-lib-0.* -y
On Ubuntu, they can be installed with the following commands:
::
sudo apt-get remove python3-dev -y
sudo apt-get remove aws-neuronx-gpsimd-tools=0.* -y
sudo apt-get remove aws-neuronx-gpsimd-customop-lib=0.* -y
sudo apt-get install python3-dev -y
sudo apt-get install aws-neuronx-gpsimd-tools=0.* -y
sudo apt-get install aws-neuronx-gpsimd-customop-lib=0.* -y
For all the commands below, make sure you are in the virtual environment that you have created above before you run the commands:
.. code:: shell
source ~/aws_neuron_venv_pytorch/bin/activate
Install dependencies for PyTorch Custom Extensions in your environment by running:
.. code:: bash
pip install regex
pip install ninja
The ``ninja`` package is only needed for the reference CPU example. It is not needed by Neuron to run on Trainium instances.
To download the source code for this tutorial, do:
.. code:: bash
git clone https://github.com/aws-neuron/aws-neuron-samples.git
cd aws-neuron-samples/torch-neuronx/training/customop_mlp
In the ``customop_mlp`` directory there are two subdirectories. The ``pytorch`` directory contains an example model and training script using a custom operator that runs on the CPU device with standard PyTorch APIs and libraries (i.e. not specific to AWS/Neuron). The ``neuron`` directory contains a version of the same model and training script with the custom operator ported to Neuron to run on Trn1 using the XLA device.
Basic PyTorch Custom Relu Operator
----------------------------------
For the next few sections we’ll review the example model in the ``pytorch`` directory. This is a condensed and simplified explanation of PyTorch C++ Extensions; for more details, see the `PyTorch documentation <https://pytorch.org/tutorials/advanced/cpp_extension.html>`_. In ``my_ops.py`` we implement a custom relu activation op as a torch autograd function so that we can use it in a training loop:
.. code-block:: python
import torch
torch.ops.load_library('librelu.so')
class Relu(torch.autograd.Function):
@staticmethod
def forward(ctx, input):
ctx.save_for_backward(input)
return torch.ops.my_ops.relu_forward(input)
@staticmethod
def backward(ctx, grad):
input, = ctx.saved_tensors
return torch.ops.my_ops.relu_backward(grad, input), None
Notice that here we first load ``librelu.so`` using the ``load_library`` API, and then call the ``relu_forward`` and ``relu_backward`` functions from our library within the relevant static methods.
We implemented these two library functions in the ``relu.cpp`` file:
.. code-block:: c++
torch::Tensor relu_forward(const torch::Tensor& t_in) {
...
t_out_acc[i][j] = t_in_acc[i][j] > 0.0 ? t_in_acc[i][j] : 0.0;
...
}
torch::Tensor relu_backward(const torch::Tensor& t_grad, const torch::Tensor& t_in) {
...
t_out_acc[i][j] = t_in_acc[i][j] > 0.0 ? t_grad_acc[i][j] : 0.0;
...
}
TORCH_LIBRARY(my_ops, m) {
m.def("relu_forward", &relu_forward);
m.def("relu_backward", &relu_backward);
}
We then build them into a library using the PyTorch C++ extension APIs in the ``build.py`` script:
.. code-block:: python
torch.utils.cpp_extension.load(
name='librelu',
sources=['relu.cpp'],
is_python_module=False,
build_directory=os.getcwd()
)
Run ``python build.py`` to produce the ``librelu.so`` library.
Multi-layer perceptron MNIST model
----------------------------------
In ``model.py``, we define the multi-layer perceptron (MLP) MNIST model with 3 linear layers and a custom ReLU activation, followed by a log-softmax layer. Highlighted below are the relevant custom changes in the ``model.py`` file:
.. code-block:: python
:emphasize-lines: 4, 16, 18
import torch
import torch.nn as nn
from torch.nn import functional as F
import my_ops
# Declare 3-layer MLP for MNIST dataset
class MLP(nn.Module):
def __init__(self, input_size = 28 * 28, output_size = 10, layers = [120, 84]):
super(MLP, self).__init__()
self.fc1 = nn.Linear(input_size, layers[0])
self.fc2 = nn.Linear(layers[0], layers[1])
self.fc3 = nn.Linear(layers[1], output_size)
def forward(self, x):
f1 = self.fc1(x)
r1 = my_ops.Relu.apply(f1)
f2 = self.fc2(r1)
r2 = my_ops.Relu.apply(f2)
f3 = self.fc3(r2)
return torch.log_softmax(f3, dim=1)
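As a quick sanity check (not part of the sample repository), the model can be exercised on CPU once ``librelu.so`` has been built with ``build.py``:

.. code-block:: python

    import torch
    from model import MLP

    # Forward a random flattened 28x28 image through the MLP; the output is
    # a log-probability vector over the 10 MNIST classes.
    model = MLP()
    out = model(torch.rand(1, 28 * 28))
    print(out.shape)  # torch.Size([1, 10])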
Training the MLP model on CPU
-----------------------------
In the ``train_cpu.py`` script we load the MNIST train dataset, instantiate the MLP model, and use ``device='cpu'`` to execute on the host CPU. Expected CPU output:
.. code:: bash
----------Training ---------------
Train throughput (iter/sec): 286.96994718801335
Final loss is 0.1040
----------End Training ---------------
Neuron Relu CustomOp
--------------------
Now switch to the ``neuron`` directory. To migrate our PyTorch CustomOp to Neuron, we have to make a few small changes. First, we create a new ``shape.cpp`` file to implement our shape function as required by XLA (see :ref:`feature-custom-operators-devguide` for details). We also replace the ``TORCH_LIBRARY`` API with ``NEURON_LIBRARY``.
.. code-block:: c++
torch::Tensor relu_fwd_shape(torch::Tensor t_in) {
torch::Tensor t_out = torch::zeros(t_in.sizes(), torch::kFloat);
return t_out;
}
torch::Tensor relu_bwd_shape(torch::Tensor t_grad, torch::Tensor t_in) {
torch::Tensor t_out = torch::zeros(t_in.sizes(), torch::kFloat);
return t_out;
}
NEURON_LIBRARY(my_ops, m) {
m.def("relu_forward", &relu_fwd_shape, "relu_forward");
m.def("relu_backward", &relu_bwd_shape, "relu_backward");
}
We then build it using the ``torch_neuronx`` package in ``build.py``:
.. code-block:: python
from torch_neuronx.xla_impl import custom_op
custom_op.load(
name='relu',
compute_srcs=['relu.cpp'],
shape_srcs=['shape.cpp'],
build_directory=os.getcwd()
)
Notice that here we specify both the ``relu.cpp`` and ``shape.cpp`` files separately. This is because the shape functions will be compiled with an x86 compiler and run on the host during the XLA compilation, and the compute functions will be compiled for the NeuronCore device and executed during the training loop. Running ``build.py`` produces the same ``librelu.so`` as in the CPU example, but compiles the source code to execute on the NeuronCore.
In our ``my_ops.py`` file we just use the ``torch_neuronx`` API to load our new library and execute our customOp exactly the same way we did before:
.. code-block:: python
import torch
import torch_neuronx
from torch_neuronx.xla_impl import custom_op
custom_op.load_library('librelu.so')
class Relu(torch.autograd.Function):
@staticmethod
def forward(ctx, input):
ctx.save_for_backward(input)
return torch.ops.my_ops.relu_forward(input)
@staticmethod
def backward(ctx, grad):
input, = ctx.saved_tensors
return torch.ops.my_ops.relu_backward(grad, input), None
Training the MLP model on Trainium
----------------------------------
In the ``train.py`` script we modify the CPU training script ``train_cpu.py`` to run with PyTorch Neuron via ``torch_xla``.
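A minimal sketch of the kind of changes involved is shown below; the optimizer and loss shown are illustrative, and the ``train.py`` in the sample repository is the authoritative version:

.. code-block:: python

    import torch
    import torch_xla.core.xla_model as xm
    from model import MLP

    # Target the XLA device instead of 'cpu'; xm.mark_step() triggers XLA
    # graph compilation and execution on the NeuronCore.
    device = xm.xla_device()
    model = MLP().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.NLLLoss()

    def train_step(data, target):
        optimizer.zero_grad()
        loss = loss_fn(model(data.to(device)), target.to(device))
        loss.backward()
        optimizer.step()
        xm.mark_step()
        return loss

Expected output on a trn1 instance: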
.. code:: bash
----------Training ---------------
2023-02-02 22:46:58.000299: INFO ||NCC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/USER_neuroncc-2.0.0.8683a0+c94c3936c/MODULE_4447837791278761679/MODULE_0_SyncTensorsGraph.329_4447837791278761679_ip-172-31-38-167.us-west-2.compute.internal-49ad7ade-14011-5f3bf523d8788/1650ba41-bcfd-4d15-9038-16d391c4a57c/MODULE_0_SyncTensorsGraph.329_4447837791278761679_ip-172-31-38-167.us-west-2.compute.internal-49ad7ade-14011-5f3bf523d8788.neff. Exiting with a successfully compiled graph
2023-02-02 22:46:58.000433: INFO ||NCC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/USER_neuroncc-2.0.0.8683a0+c94c3936c/MODULE_16964505026440903899/MODULE_1_SyncTensorsGraph.401_16964505026440903899_ip-172-31-38-167.us-west-2.compute.internal-4d0cabba-14011-5f3bf529794a3/23d74230-59dd-4347-b247-fa98aed416bd/MODULE_1_SyncTensorsGraph.401_16964505026440903899_ip-172-31-38-167.us-west-2.compute.internal-4d0cabba-14011-5f3bf529794a3.neff. Exiting with a successfully compiled graph
Train throughput (iter/sec): 117.47151142662648
Final loss is 0.1970
----------End Training ---------------
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _neuronx-customop-mlp-tutorial:
Neuron Custom C++ Operators in MLP Training
===========================================
In this tutorial we’ll demonstrate how to prepare a PyTorch model that contains a custom operator (ie. CppExtension) for Neuron compilation to run on Trainium EC2 instances. To learn more about Neuron CustomOps see :ref:`neuron_c++customops`. For a deeper dive on MNIST or Multi-Layer Perceptron models, see the :ref:`neuronx-mlp-training-tutorial`. This tutorial assumes the reader is familiar with `PyTorch Custom Extensions <https://pytorch.org/tutorials/advanced/cpp_extension.html>`_.
.. contents:: Table of Contents
:local:
:depth: 2
Setup Environment and Download Examples
---------------------------------------
Before running the tutorial please follow the installation instructions at:
* :ref:`pytorch-neuronx-install` on Trn1
.. note::
The name of ``aws-neuronx-gpsimd-customop`` has been changed to ``aws-neuronx-gpsimd-customop-lib`` as of the neuron 2.10 release.
.. note::
Custom C++ Operators are supported as of Neuron SDK Version 2.7 as a beta feature. As such this feature is not installed by default. Additional tooling and library packages (RPM and DEB) are required. On AL2, they can be installed with the following commands:
::
sudo yum remove python3-devel -y
sudo yum remove aws-neuronx-gpsimd-tools-0.* -y
sudo yum remove aws-neuronx-gpsimd-customop-lib-0.* -y
sudo yum install python3-devel -y
sudo yum install aws-neuronx-gpsimd-tools-0.* -y
sudo yum install aws-neuronx-gpsimd-customop-lib-0.* -y
On Ubuntu, they can be installed with the following commands:
::
sudo apt-get remove python3-dev -y
sudo apt-get remove aws-neuronx-gpsimd-tools=0.* -y
sudo apt-get remove aws-neuronx-gpsimd-customop-lib=0.* -y
sudo apt-get install python3-dev -y
sudo apt-get install aws-neuronx-gpsimd-tools=0.* -y
sudo apt-get install aws-neuronx-gpsimd-customop-lib=0.* -y
For all the commands below, make sure you are in the virtual environment that you have created above before you run the commands:
.. code:: shell
source ~/aws_neuron_venv_pytorch/bin/activate
Install dependencies for PyTorch Custom Extensions in your environment by running:
.. code:: bash
pip install regex
pip install ninja
The ``ninja`` package is only needed for the reference CPU example. It is not needed by Neuron to run on Trainium instances.
To download the source code for this tutorial, do:
.. code:: bash
git clone https://github.com/aws-neuron/aws-neuron-samples.git
cd aws-neuron-samples/torch-neuronx/training/customop_mlp
In the ``customop_mlp`` directory there are two subdirectories. The ``pytorch`` directory contains an example model and training script using a custom operator that runs using the cpu device with standard PyTorch APIs and libraries (ie. not specific to AWS/Neuron). The ``neuron`` directory contains a version of the same model and training script with the custom operator ported to Neuron to run on trn1 using the XLA device.
Basic PyTorch Custom Relu Operator
----------------------------------
For the next few sections we’ll review the example model in the ``pytorch`` directory. This is a condensed and simplified explanation of PyTorch C++ Extensions, for more details see the `PyTorch documentation <https://pytorch.org/tutorials/advanced/cpp_extension.html>`_. In ``my_ops.py`` we implement a custom relu activation op as a torch autograd function so that we can use it in a training loop:
.. code-block:: python
import torch
torch.ops.load_library('librelu.so')
class Relu(torch.autograd.Function):
@staticmethod
def forward(ctx, input):
ctx.save_for_backward(input)
return torch.ops.my_ops.relu_forward(input)
@staticmethod
def backward(ctx, grad):
input, = ctx.saved_tensors
return torch.ops.my_ops.relu_backward(grad, input), None
Notice that here we first load ``librelu.so`` using the ``load_library`` API. And then call the ``relu_forward`` and ``relu_backward`` functions from our library within the relevant static methods.
We implemented these two library functions in the ``relu.cpp`` file:
.. code-block:: c++
torch::Tensor relu_forward(const torch::Tensor& t_in) {
...
t_out_acc[i][j] = t_in_acc[i][j] > 0.0 ? t_in_acc[i][j] : 0.0;
...
}
torch::Tensor relu_backward(const torch::Tensor& t_grad, const torch::Tensor& t_in) {
...
t_out_acc[i][j] = t_in_acc[i][j] > 0.0 ? t_grad_acc[i][j] : 0.0;
...
}
TORCH_LIBRARY(my_ops, m) {
m.def("relu_forward", &relu_forward);
m.def("relu_backward", &relu_backward);
}
We then build them into a library using the PyTorch C++ extension APIs in the ``build.py`` script:
.. code-block:: python
torch.utils.cpp_extension.load(
name='librelu',
sources=['relu.cpp'],
is_python_module=False,
build_directory=os.getcwd()
)
Run ``python build.py`` to produce the ``librelu.so`` library.
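As a quick sanity check that the library built correctly, you can load and exercise the op from the same directory (the tensor values below are arbitrary):

.. code-block:: python

    import torch
    import my_ops  # loads librelu.so via torch.ops.load_library

    x = torch.randn(4, 4, requires_grad=True)
    y = my_ops.Relu.apply(x)
    y.sum().backward()
    print(y)       # negative entries clamped to zero
    print(x.grad)  # 1.0 where x > 0, else 0.0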
Multi-layer perceptron MNIST model
----------------------------------
In ``model.py``, we define the multi-layer perceptron (MLP) MNIST model with 3 linear layers and a custom ReLU activation, followed by a log-softmax layer. Highlighted below are the relevant custom changes in the ``model.py`` file:
.. code-block:: python
:emphasize-lines: 4, 16, 18
import torch
import torch.nn as nn
from torch.nn import functional as F
import my_ops
# Declare 3-layer MLP for MNIST dataset
class MLP(nn.Module):
def __init__(self, input_size = 28 * 28, output_size = 10, layers = [120, 84]):
super(MLP, self).__init__()
self.fc1 = nn.Linear(input_size, layers[0])
self.fc2 = nn.Linear(layers[0], layers[1])
self.fc3 = nn.Linear(layers[1], output_size)
def forward(self, x):
f1 = self.fc1(x)
r1 = my_ops.Relu.apply(f1)
f2 = self.fc2(r1)
r2 = my_ops.Relu.apply(f2)
f3 = self.fc3(r2)
return torch.log_softmax(f3, dim=1)
Training the MLP model on CPU
-----------------------------
In the ``train_cpu.py`` script we load the MNIST train dataset, instantiate the MLP model, and use ``device='cpu'`` to execute on the host CPU. Expected CPU output:
.. code:: bash
----------Training ---------------
Train throughput (iter/sec): 286.96994718801335
Final loss is 0.1040
----------End Training ---------------
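Conceptually, ``train_cpu.py`` resembles the following condensed sketch (the data loading and hyperparameters here are illustrative rather than the script's exact code, and assume ``torchvision`` is available for the MNIST dataset):

.. code-block:: python

    import torch
    from torchvision import datasets, transforms
    from model import MLP

    device = 'cpu'
    train_ds = datasets.MNIST('./data', train=True, download=True,
                              transform=transforms.ToTensor())
    loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

    model = MLP().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.NLLLoss()  # pairs with the model's log_softmax output

    model.train()
    for data, target in loader:
        optimizer.zero_grad()
        output = model(data.view(data.size(0), -1).to(device))
        loss = loss_fn(output, target.to(device))
        loss.backward()
        optimizer.step()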
Neuron Relu CustomOp
--------------------
Now switch over to the ``neuron`` directory. To migrate our PyTorch CustomOp to Neuron, we have to make a few small changes. First, we create a new ``shape.cpp`` file to implement our shape functions as required by XLA (see :ref:`feature-custom-operators-devguide` for details). We also replace the ``TORCH_LIBRARY`` API with ``NEURON_LIBRARY``.
.. code-block:: c++
torch::Tensor relu_fwd_shape(torch::Tensor t_in) {
torch::Tensor t_out = torch::zeros(t_in.sizes(), torch::kFloat);
return t_out;
}
torch::Tensor relu_bwd_shape(torch::Tensor t_grad, torch::Tensor t_in) {
torch::Tensor t_out = torch::zeros(t_in.sizes(), torch::kFloat);
return t_out;
}
NEURON_LIBRARY(my_ops, m) {
m.def("relu_forward", &relu_fwd_shape, "relu_forward");
m.def("relu_backward", &relu_bwd_shape, "relu_backward");
}
We then build it using the ``torch_neuronx`` package in ``build.py``:
.. code-block:: python
from torch_neuronx.xla_impl import custom_op
custom_op.load(
name='relu',
compute_srcs=['relu.cpp'],
shape_srcs=['shape.cpp'],
build_directory=os.getcwd()
)
Notice that we specify the ``relu.cpp`` and ``shape.cpp`` files separately. This is because the shape functions are compiled with an x86 compiler and run on the host during XLA compilation, while the compute functions are compiled for the NeuronCore device and executed during the training loop. Running ``build.py`` produces the same ``librelu.so`` as in the CPU example, but compiles the source code to execute on the NeuronCore.
In our ``my_ops.py`` file we use the ``torch_neuronx`` API to load our new library, and then execute our CustomOp exactly the same way we did before:
.. code-block:: python
import torch
import torch_neuronx
from torch_neuronx.xla_impl import custom_op
custom_op.load_library('librelu.so')
class Relu(torch.autograd.Function):
@staticmethod
def forward(ctx, input):
ctx.save_for_backward(input)
return torch.ops.my_ops.relu_forward(input)
@staticmethod
def backward(ctx, grad):
input, = ctx.saved_tensors
return torch.ops.my_ops.relu_backward(grad, input), None
Training the MLP model on Trainium
----------------------------------
In the ``train.py`` script we modify the CPU training script ``train_cpu.py`` to run with PyTorch Neuron (torch_xla); a sketch of the typical changes follows the output below. Expected output on a trn1 instance:
.. code:: bash
----------Training ---------------
2023-02-02 22:46:58.000299: INFO ||NCC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/USER_neuroncc-2.0.0.8683a0+c94c3936c/MODULE_4447837791278761679/MODULE_0_SyncTensorsGraph.329_4447837791278761679_ip-172-31-38-167.us-west-2.compute.internal-49ad7ade-14011-5f3bf523d8788/1650ba41-bcfd-4d15-9038-16d391c4a57c/MODULE_0_SyncTensorsGraph.329_4447837791278761679_ip-172-31-38-167.us-west-2.compute.internal-49ad7ade-14011-5f3bf523d8788.neff. Exiting with a successfully compiled graph
2023-02-02 22:46:58.000433: INFO ||NCC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/USER_neuroncc-2.0.0.8683a0+c94c3936c/MODULE_16964505026440903899/MODULE_1_SyncTensorsGraph.401_16964505026440903899_ip-172-31-38-167.us-west-2.compute.internal-4d0cabba-14011-5f3bf529794a3/23d74230-59dd-4347-b247-fa98aed416bd/MODULE_1_SyncTensorsGraph.401_16964505026440903899_ip-172-31-38-167.us-west-2.compute.internal-4d0cabba-14011-5f3bf529794a3.neff. Exiting with a successfully compiled graph
Train throughput (iter/sec): 117.47151142662648
Final loss is 0.1970
----------End Training ---------------
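For reference, the torch_xla changes relative to the CPU script are typically confined to selecting the XLA device and marking graph boundaries; a minimal sketch (illustrative, not the exact contents of ``train.py``):

.. code-block:: python

    import torch_xla.core.xla_model as xm

    device = xm.xla_device()  # a NeuronCore via the XLA device, instead of 'cpu'
    model = MLP().to(device)

    model.train()
    for data, target in loader:
        optimizer.zero_grad()
        output = model(data.view(data.size(0), -1).to(device))
        loss = loss_fn(output, target.to(device))
        loss.backward()
        optimizer.step()
        xm.mark_step()  # cut and execute the lazily recorded XLA graph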
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.rst.txt
```
.. _zero1-gpt2-pretraining-tutorial:
ZeRO-1 Tutorial
===============
What is ZeRO-1?
---------------
ZeRO-1 (Zero Redundancy Optimizer Stage 1,
https://arxiv.org/abs/1910.02054) is an optimization technique for
large-scale deep learning models. It is a memory efficient variation of
data parallelism. ZeRO leverages the aggregate computation and memory
resources of data parallelism to reduce the memory and compute
requirements of each accelerator used for model training. ZeRO reduces
the memory consumption of each accelerator by partitioning the various
model training states (weights, gradients, and optimizer states) across
the available devices in the distributed training hardware. ZeRO is
being implemented as incremental stages of optimizations. In stage 1,
the optimizer states (e.g., for Adam optimizer, 32-bit weights, and the
first, and second moment estimates) are partitioned across the
processes, so that each process updates only its partition.
.. image:: zero1.jpg
:alt: Image: zero1.jpg
We implemented an XLA-friendly version of ZeRO-1, and it has been merged
into the open-source PyTorch/XLA project. Users can enable the ZeRO-1
algorithm by simply wrapping the original optimizer as shown below.
::
# Before:
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
# After
optimizer = ZeroRedundancyOptimizer(model.parameters(), torch.optim.Adam, lr=0.0001)
Then just call ``optimizer.step()`` directly; the wrapped optimizer will
handle the distributed operations automatically.
The above code snippet illustrates the basic usage. Generally, users can
use the ZeRO-1 optimizer like a normal optimizer. In addition,
``ZeroRedundancyOptimizer`` provides other features: enabling gradient
clipping or using a different data type for the wrapped optimizer. Note
that though most optimizers can be used with ZeRO-1, optimizers that
compute a norm over the parameters (e.g. LAMB) might show accuracy
disparities compared to the original local optimizer, because under
ZeRO-1 these optimizers see only parameter shards rather than the full
parameters.
Usage
-----
To enable the ZeRO-1 optimizer, just import it and replace the original
optimizer with the ZeRO-1 wrapped version:
::
from torch_xla.distributed.zero_redundancy_optimizer import ZeroRedundancyOptimizer
...
...
device = xm.xla_device()
model = model.to(device)
optimizer = ZeroRedundancyOptimizer(model.parameters(), AdamW, lr=0.001)
Then, in the training loop, just call ``optimizer.step()``. Note that we
should not use ``xm.reduce_gradients()`` or ``xm.optimizer_step()``, as
gradient reduction will be handled by ZeRO-1.
::
...
loss.backward()
xm.mark_step()
optimizer.step()
xm.mark_step()
The ZeRO-1 optimizer also provides some additional features; users can pass
these arguments to the wrapper constructor (see the sketch after this list):

- ``optimizer_dtype`` chooses the data type used by the wrapped optimizer;
  the default is ``torch.float32``
- ``grad_clipping`` enables gradient clipping; the default is ``True``
- ``max_norm`` sets the maximum norm value used by gradient clipping;
  the default is 1.0
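For example, to keep the wrapped optimizer's states in BFloat16 and raise the
clipping threshold (the values here are illustrative):

::

    optimizer = ZeroRedundancyOptimizer(
        model.parameters(),
        torch.optim.AdamW,
        lr=0.001,
        optimizer_dtype=torch.bfloat16,  # data type for optimizer states
        grad_clipping=True,              # enabled by default
        max_norm=2.0,                    # gradient clipping threshold
    )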
GPT2-XL Pretraining Tutorial
----------------------------
Table of Contents:
- Setup
- Dataset
- Training
--------------
Setup
~~~~~
We use a single trn1.32xlarge instance. Follow :ref:`Install PyTorch Neuron on
Trn1 <setup-torch-neuronx>` to set up the environment first. For all the commands below, make sure
you are in the virtual environment that you created during setup before
you run them:
**requirements.txt:** We pin the following Hugging Face library versions
necessary for the tutorial:
::
transformers==4.27.3
accelerate==0.17
datasets==2.10.1
tensorboard==2.12.2
::
source ~/aws_neuron_venv_pytorch/bin/activate
::
git clone https://github.com/aws-neuron/aws-neuron-samples.git
cd aws-neuron-samples/torch-neuronx/training/zero1_gpt2
python3 -m pip install -r requirements.txt
The specific files you need for this tutorial:
- config_1p5B_gpt2.json: the 1.5B-parameter GPT2-XL model configuration used in the tutorial
- neuron_utils.py: includes utility functions and the logging tools
- run_clm_no_trainer.py: the main training script that runs the actual
training
- run_clm.sh: the shell script to launch the training job
Dataset
~~~~~~~
For the dataset, we use the wikitext dataset, specifically
``wikitext-103-raw-v1``, provided by Hugging Face at
https://huggingface.co/datasets/wikitext. The data is preprocessed the
first time the training script runs, and the preprocessed data is then
cached in the Hugging Face cache directory for any future training runs.
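The download and caching are handled by the Hugging Face ``datasets``
library inside the training script; conceptually, the first access is
equivalent to:

::

    from datasets import load_dataset

    raw_datasets = load_dataset("wikitext", "wikitext-103-raw-v1")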
If the main process downloads the dataset, tokenizes the data, and groups
it together successfully, the expected output at the beginning of training
is as below:
::
***** Running training *****
Num examples = 114248
Num Epochs = 29
Instantaneous batch size per device = 1
Total train batch size (w. parallel, distributed & accumulation) = 32
Gradient Accumulation steps = 1
Total optimization steps = 100000
Training
~~~~~~~~
The GPT2 Python training script is adapted from the example
`run_clm_no_trainer.py <https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm_no_trainer.py>`__
in
https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling.
It incorporates the Hugging Face Accelerate library
(https://github.com/huggingface/accelerate). Given Accelerate's
experimental support for XLA devices, some modifications are needed, along
with bridge code to XLA. In particular, some workarounds to support
Accelerate in the training script are listed in "Known Issues,
Workarounds, and Limitations" below.
In this example, we use GPT2-XL and show the training steps
with mixed precision (BFloat16 and Float32).
- For single-node training, run:
::
# compile graphs
neuron_parallel_compile bash run_clm.sh MIXED wikitext-103-raw-v1
bash run_clm.sh MIXED wikitext-103-raw-v1
- For multi-node training, run:
::
sbatch run_clm_compile.slurm
then
::
sbatch run_clm.slurm
Known Issues, Workarounds, and Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Activation checkpointing + Custom FAL Dropout: We have implemented a
version of dropout that caches the masks obtained during the first
forward pass so that they can be reused during the recomputed forward
pass when activation checkpointing is enabled. All the scripts have the
following flag turned on: ``export NEURON_ENABLE_NOSEED_DROPOUT=1``.
2. Error message: ``ValueError: invalid literal for int() with base 10: ''``.
Simply re-running the script resolves this issue. This issue is already fixed
in newer versions of transformers; see https://github.com/huggingface/transformers/pull/22427.
3. Accelerate API workarounds:

- Error message: "Gradient accumulation is not supported on TPU.
Please set gradient_accumulation_steps to 1 and don’t pass in a
GradientAccumulationPlugin object." More context is available at
https://github.com/huggingface/accelerate/pull/479. Training
still works by commenting out the assertion and avoiding the
accumulation wrapper ``accelerator.accumulate(model)``.
- ``Accelerator.prepare`` call: We have noticed that the optimizer
returned by this API is not directly usable, due to gaps in
configuring the Accelerate API for XLA devices.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/tutorials/training/bert.rst.txt
```
.. _hf-bert-pretraining-tutorial:
Hugging Face BERT Pretraining Tutorial
======================================
This tutorial explains how to run Hugging Face BERT-Large model
pretraining on Trainium using PyTorch Neuron.
The Hugging Face BERT pretraining example demonstrates the steps
required to perform single-node, multi-accelerator PyTorch model
training using the new AWS EC2 Trn1 (Trainium) instances and the AWS
Neuron SDK. This tutorial is an adaptation of an existing `BERT
example <https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/run_pretraining.py>`__
with the following important characteristics:
- Framework: PyTorch/XLA
- Model: Hugging Face BertForPreTraining
- Optimizer: AdamW, LAMB (Layerwise Adaptive Moments optimizer)
- Scheduler: Hugging Face's get_linear_schedule_with_warmup
- Allreduce occurs before optimizer step, after gradient accumulations
(following DeepSpeed's Smart Gradient Accumulation)
- Training data types: Float32, full BFloat16 casting and Stochastic Rounding (SR), PyTorch Autocast (Automatic Mixed Precision or AMP) and SR
As done in the original BERT paper, BERT pretraining happens in two
phases. In the first phase (phase 1) BERT maximum sequence length is fixed
at 128 tokens, while in phase 2 it is fixed at 512 tokens.
Neuron provides access to Trainium devices through an extension of PyTorch/XLA, a library that includes the familiar PyTorch interface along with XLA-specific additions. For additional details
relating to PyTorch/XLA, please refer to the `official PyTorch/XLA
documentation <https://pytorch.org/xla/>`__.
.. contents:: Table of Contents
:local:
:depth: 3
.. include:: ../note-performance.txt
Phase 1 BERT-Large pretraining
------------------------------
Setting up the training environment on trn1.32xlarge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The BERT training script ``dp_bert_large_hf_pretrain_hdf5.py``
can run on a Trainium instance (trn1.32xlarge) that contains the
appropriate Neuron runtime and Python dependencies.
First, on a trn1.32xlarge instance, follow the installation instructions at:
:ref:`Install PyTorch Neuron on Trn1 <setup-torch-neuronx>`
Please set the instance storage to *512GB* or more if you intend to run multiple experiments and save many checkpoints.
For all the commands below, make sure you are in the virtual environment that you have created above before you run the commands:
.. code:: shell
source ~/aws_neuron_venv_pytorch/bin/activate
Next, clone the AWS Neuron Samples repository and install requirements in the BERT tutorial directory ``aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain``:
.. code:: shell
cd ~/
git clone https://github.com/aws-neuron/aws-neuron-samples.git
python3 -m pip install -r ~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain/requirements.txt
Downloading tokenized and sharded dataset files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To download the tokenized and sharded dataset files needed for this tutorial, please run the following commands:
.. code:: shell
mkdir -p ~/examples_datasets/
pushd ~/examples_datasets/
aws s3 cp s3://neuron-s3/training_datasets/bert_pretrain_wikicorpus_tokenized_hdf5/bert_pretrain_wikicorpus_tokenized_hdf5_seqlen128.tar . --no-sign-request
tar -xf bert_pretrain_wikicorpus_tokenized_hdf5_seqlen128.tar
rm bert_pretrain_wikicorpus_tokenized_hdf5_seqlen128.tar
aws s3 cp s3://neuron-s3/training_datasets/bert_pretrain_wikicorpus_tokenized_hdf5/bert_pretrain_wikicorpus_tokenized_hdf5_seqlen512.tar . --no-sign-request
tar -xf bert_pretrain_wikicorpus_tokenized_hdf5_seqlen512.tar
rm bert_pretrain_wikicorpus_tokenized_hdf5_seqlen512.tar
popd
``~/examples_datasets/bert_pretrain_wikicorpus_tokenized_hdf5_seqlen128`` will now have the tokenized and sharded dataset files for phase 1 pretraining and ``~/examples_datasets/bert_pretrain_wikicorpus_tokenized_hdf5_seqlen512`` for phase 2 pretraining.
Number of workers
~~~~~~~~~~~~~~~~~
You will be using torchrun (`PyTorch's Elastic Launch <https://pytorch.org/docs/stable/elastic/run.html>`__) to run some of the commands in this tutorial. When running the training script, you can configure the number of
NeuronCores to use for training by using torchrun's :option:`--nproc_per_node` option. In this tutorial, we use 32 NeuronCores on trn1.32xlarge.
.. note::
Currently the Neuron Runtime only supports 1- and 2-worker configurations on trn1.2xlarge, and 1-, 2-, 8-, and 32-worker configurations on trn1.32xlarge.
.. _bf16_sr_phase1:
BFloat16 and stochastic rounding in phase 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Phase 1 pretraining performance can be increased by using BFloat16 casting
and stochastic rounding. BFloat16 casting and stochastic rounding can be enabled by setting environment
variable ``XLA_USE_BF16=1`` when
launching the pretraining job. ``XLA_DOWNCAST_BF16=1`` can also be used instead of ``XLA_USE_BF16=1`` to preserve part of the training loop in FP32. Here we use ``XLA_DOWNCAST_BF16=1`` to ensure smooth loss curve when loss averaging is used. We also preserve the optimizer states in FP32 using a modified HuggingFace AdamW implementation in order to match FP32 loss with BFloat16.
To achieve maximum performance while maintaining loss
convergence characteristics, we are using batch size of 16 and
gradient accumulation microsteps of 32 to maintain global batch size of 16384 for phase 1.
The batch size and gradient accumulation microstep changes can be set by
launching the BERT pretraining script ``dp_bert_large_hf_pretrain_hdf5.py`` with
command-line arguments ``--batch_size=16 --grad_accum_usteps=32``, as seen in the following steps.
Another option with BFloat16 using PyTorch AutoCast (Automatic Mixed Precision or AMP) is covered at :ref:`amp-sr-phase1`.
Pre-compilation
~~~~~~~~~~~~~~~
PyTorch Neuron evaluates operations lazily during execution of the training loops, which means it builds a symbolic
graph in the background; the graph is executed in hardware only when a tensor is printed, transferred to CPU, or ``xm.mark_step()`` is encountered (``xm.mark_step()`` is implicitly called by ``pl.MpDeviceLoader/pl.ParallelLoader``). During execution of the training loops, PyTorch Neuron can build multiple graphs depending on the number of conditional paths taken. For BERT-Large pretraining, PyTorch Neuron builds multiple unique graphs that should be compiled before running on the NeuronCores. PyTorch Neuron compiles those graphs only if they are not already in the XLA in-memory cache or the persistent cache. To reduce the compilation time of these graphs, you can pre-compile them using the utility ``neuron_parallel_compile`` (provided by the ``libneuronxla`` package, a transitive dependency of ``torch-neuronx``) as shown:
.. code:: shell
cd ~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain
neuron_parallel_compile XLA_DOWNCAST_BF16=1 torchrun --nproc_per_node=32 \
dp_bert_large_hf_pretrain_hdf5.py \
--steps_this_run 10 \
--batch_size 16 \
--grad_accum_usteps 32 |& tee compile_log.txt
This command performs a fast trial run of the training script to build
graphs, and then compiles those graphs in parallel using multiple Neuron Compiler processes before
populating the on-disk persistent cache with the compiled graphs. This helps make
the actual training run faster because the compiled graphs will be loaded from the persistent cache.
Currently it takes ~13 minutes to compile the BERT-Large model training step using the pre-compilation script (compared to ~40 minutes without it).
Note that the command above specifies 32 NeuronCores for trn1.32xlarge via the :option:`--nproc_per_node` option.
The script ``run_dp_bert_large_hf_pretrain_bf16_s128.sh`` is provided in the same BERT tutorial directory for convenience and you can simply run the script using ``neuron_parallel_compile ./run_dp_bert_large_hf_pretrain_bf16_s128.sh`` to start the precompilation.
The pretokenized dataset is expected to be at ``~/examples_datasets/bert_pretrain_wikicorpus_tokenized_hdf5_seqlen128/`` by default (see above for downloading instructions) and can be changed via the ``--data_dir`` option.
.. note::
The trial run during pre-compilation currently outputs invalid loss numbers. Please disregard them.
.. note::
The command after ``neuron_parallel_compile`` should match the actual run command, except for the option :option:`--steps_this_run` which shortens the trial run just enough to allow the tool to build all the graphs needed for the actual run.
If you interrupt
the run and restart the execution without changing model configurations or training hyperparameters, the new run will detect the cached
graphs in the persistent cache (on-disk) and reload the compiled graphs for
execution, avoiding any recompilation time.
Changes made to the BERT model configuration (layers, hidden
size, attention heads in the get_model function), batch size (using
:option:`--batch_size` option), optimizer or number of workers may trigger
graph recompilation. It is best to rerun the pre-compilation step above if these changes are made.
You can adjust the following hyperparameters without changing the model
and causing recompilation:
- Number of global steps to run (:option:`--steps_this_run` option)
- Learning rate (:option:`--lr` option)
- Gradient accumulation steps > 1 (:option:`--grad_accum_usteps` option). If
1 then there's no gradient accumulation and the graphs change causing
recompilation.
Initiating a Training Job
~~~~~~~~~~~~~~~~~~~~~~~~~
After running the pre-compilation step, continue
with the actual phase 1 pretraining by running the following
set of commands to launch 32 data parallel distributed training workers on trn1.32xlarge:
.. code:: bash
cd ~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain
XLA_DOWNCAST_BF16=1 torchrun --nproc_per_node=32 \
dp_bert_large_hf_pretrain_hdf5.py \
--batch_size 16 \
--grad_accum_usteps 32 |& tee run_pretrain_log.txt
The script ``run_dp_bert_large_hf_pretrain_bf16_s128.sh`` is provided in the same BERT tutorial directory for convenience and you can simply run the script to start the training.
As the training script launches, you will initially see several console
messages indicating that the Neuron Runtime is initializing:
.. code:: bash
Using Neuron Runtime
Using Neuron Runtime
Using Neuron Runtime
Using Neuron Runtime
Using Neuron Runtime
...
A few moments later, you will see the Training Configuration and Model
Configuration in the output:
.. code:: bash
--------TRAINING CONFIG----------
Namespace(batch_size=16, data_dir='~/examples_datasets/
bert_pretrain_wikicorpus_tokenized_hdf5_seqlen128/', debug=False,
enable_pt_autocast=False, grad_accum_usteps=32, local_rank=0, lr=0.0004,
max_pred_len=20, max_steps=28125, metrics_file='/tmp/test_dict.json',
minimal_ckpt=False, num_ckpts_to_keep=1, output_dir='./output',
phase1_end_step=28125, phase2=False, resume_ckpt=False, resume_step=-1,
seed=12349, seq_len=128, shards_per_ckpt=1, steps_this_run=28125, warmup_steps=2000)
.. code:: bash
--------MODEL CONFIG----------
BertConfig {
"_name_or_path": "bert-large-uncased",
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"initializer_range": 0.02,
"intermediate_size": 4096,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.15.0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
As the worker processes begin training on the BERT dataset, you will
begin to see training metrics and the learning rate logged to the
console approximately every training step. The metrics include
average_loss, step_loss, and throughput:
.. code:: bash
LOG Thu Sep 29 22:30:10 2022 - (0, 78) step_loss : 9.1875 learning_rate : 1.56e-05 throughput : 2873.14
LOG Thu Sep 29 22:30:16 2022 - (0, 79) step_loss : 8.9375 learning_rate : 1.58e-05 throughput : 2878.09
LOG Thu Sep 29 22:30:22 2022 - (0, 80) step_loss : 9.0000 learning_rate : 1.60e-05 throughput : 2875.31
LOG Thu Sep 29 22:30:27 2022 - (0, 81) step_loss : 9.0000 learning_rate : 1.62e-05 throughput : 2877.35
LOG Thu Sep 29 22:30:33 2022 - (0, 82) step_loss : 8.8750 learning_rate : 1.64e-05 throughput : 2872.55
LOG Thu Sep 29 22:30:39 2022 - (0, 83) step_loss : 9.0000 learning_rate : 1.66e-05 throughput : 2876.17
LOG Thu Sep 29 22:30:44 2022 - (0, 84) step_loss : 9.1250 learning_rate : 1.68e-05 throughput : 2872.48
LOG Thu Sep 29 22:30:50 2022 - (0, 85) step_loss : 9.0000 learning_rate : 1.70e-05 throughput : 2873.39
By default, the training script will store all output files under
``~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain/output``. The output files consist of
the following:
- PyTorch model checkpoint files, with names containing the global step
of the checkpoint (ckpt_2000.pt, ckpt_4000.pt, etc.). Currently, the
training script saves a checkpoint after every dataset shard.
The frequency of saving checkpoint can be reduced by increasing the number of
dataset shards per checkpoint, using option :option:`--shards_per_ckpt`.
Furthermore, the number of checkpoints kept at a given time is limited by :option:`--num_ckpts_to_keep` option (currently default to 1).
- TensorBoard log files (each training run will store its logs in a
subdirectory with prefix ``neuron_tblogs_``).
Monitoring Progress of the Training Job
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using a single Trn1 instance with 32 NeuronCores, the current BERT
phase 1 pretraining will finish in about 45 hours. During this time, you will
see the average loss metric begin at about 11.2 and ultimately converge to about 1.4.
Monitoring Training Job Progress using neuron-top
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With the training job still running, launch a second SSH connection into
the trn1 instance, and use the ``neuron-top`` command to examine the
aggregate NeuronCore utilization. If you have not modified the :option:`--nproc_per_node` option
in the run command, you should observe that
all 32 NeuronCores are participating in the training job, with
utilization fluctuating around 80%.
Monitoring Training Job Progress using TensorBoard
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The demo includes TensorBoard-compatible logging, which allows the
learning rate and training metrics to be monitored in real-time. By
default, the training script logs metrics to the following TensorBoard
log directory ``~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain/output/neuron_tblogs_<date/time>_<training configs>``.
In order to view your training metrics in TensorBoard, first run the
following commands in your SSH session:
.. code:: bash
cd ~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain
tensorboard --logdir ./output
Once running, open a new SSH connection to the instance and port-forward
TCP port 6006 (ex: ``-L 6006:127.0.0.1:6006``). Once the tunnel is
established, TensorBoard can then be accessed via web browser at the
following URL: `http://localhost:6006 <http://localhost:6006/>`__.
Please note that you will not be able to access TensorBoard if you
disconnect your port-forwarding SSH session to the Trainium instance.
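For example, the tunnel can be established with a command like the following (the key pair and username are placeholders; use the ones appropriate for your instance):

.. code:: bash

    ssh -i <your-key-pair.pem> -L 6006:127.0.0.1:6006 ubuntu@<instance-ip>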
.. image:: tensorboard.png
:alt: Image: tensorboard.png
Finishing the tutorial
~~~~~~~~~~~~~~~~~~~~~~
Once you are ready, there are a couple of options for finishing
the BERT pretraining demo:
1. **Allow the training script to run to completion**. If you would like
to observe the training script run to completion, it is recommended
to launch the training script from a terminal multiplexer such as
``tmux`` or ``screen``, and then detach the session so that the
training script can run in the background. With this approach, you
can safely let the training script run unattended, without risk of an
SSH disconnection causing the training job to stop running.
2. **Stop the training job early**. To stop the training job early,
press CTRL-C in the terminal window in which you launched the
training script. In some cases, if you manually cancel a job using
CTRL-C and then later want to run the job again, you might first need
to execute ``sudo rmmod neuron; sudo modprobe neuron`` in order to
reload/reset the Neuron driver.
Phase 1 BERT-Large pretraining with Layerwise Adaptive Moments based optimizer (LAMB)
-------------------------------------------------------------------------------------
Sometimes, to reduce the training wall time, you can use a higher learning rate and a larger global batch size. The approach is discussed in `LARGE BATCH OPTIMIZATION FOR DEEP LEARNING: TRAINING BERT IN 76 MINUTES <https://arxiv.org/pdf/1904.00962.pdf>`__. Trainium supports LAMB, and in this tutorial we use the publicly available XLA-friendly LAMB implementation from https://github.com/rwightman/pytorch-image-models/blob/master/timm/optim/lamb.py.
.. code:: bash
cd ~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain
torchrun --nproc_per_node=32 \
dp_bert_large_hf_pretrain_hdf5.py \
--max_steps 7032 \
--batch_size 8 \
--optimizer LAMB \
--lr 6e-3 \
--grad_accum_usteps 256 |& tee run_pretrain_log.txt
The command-line argument ``--optimizer LAMB`` is needed; otherwise, the default optimizer AdamW will be used. In addition, you need to use a set of hyperparameters supporting the larger global batch size (GBS). In this case, the GBS for LAMB is 64k, and we use a set of hyperparameters similar to https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/README.md. Given LAMB's higher GBS, it takes fewer steps (roughly 7k) to achieve a similar level of accuracy as AdamW, which takes more than 28k steps. In addition, you can use different data types on top of LAMB. Below is an example using BFloat16 and stochastic rounding.
.. code:: bash
cd ~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain
XLA_DOWNCAST_BF16=1 torchrun --nproc_per_node=32 \
dp_bert_large_hf_pretrain_hdf5.py \
--max_steps 7032 \
--batch_size 16 \
--optimizer LAMB \
--lr 6e-3 \
--grad_accum_usteps 128 |& tee run_pretrain_log.txt
The script ``run_dp_bert_large_hf_pretrain_bf16_s128_lamb.sh`` is provided in the same BERT tutorial directory for convenience and you can simply run the script to start the training.
.. _amp-sr-phase1:
Phase 1 BERT-Large pretraining with PyTorch Autocast (AMP) and stochastic rounding
----------------------------------------------------------------------------------
Besides :ref:`bf16_sr_phase1`, you can also use AMP with stochastic rounding. Detailed background is available at https://pytorch.org/docs/stable/amp.html. AMP uses both the BFloat16 and Float32 data types, and hence provides better performance than Float32 alone. A detailed comparison is available at :ref:`trn1_training_perf`.
To enable AMP, one additional command-line argument is needed: ``--enable_pt_autocast``. Setting ``NEURON_RT_STOCHASTIC_ROUNDING_EN=1`` enables stochastic rounding.
.. code:: bash
cd ~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain
NEURON_RT_STOCHASTIC_ROUNDING_EN=1 \
torchrun --nproc_per_node=32 dp_bert_large_hf_pretrain_hdf5.py \
--batch_size 16 \
--enable_pt_autocast \
--grad_accum_usteps 32 |& tee run_pretrain_log.txt
The script ``run_dp_bert_large_hf_pretrain_bf16_s128.sh`` is provided in the same BERT tutorial directory for convenience and you can simply run the script with ``amp`` option like ``./run_dp_bert_large_hf_pretrain_bf16_s128.sh amp`` to start the training with AMP.
Phase 1 BERT-Large pretraining on two instances
-----------------------------------------------
If you have two trn1.32xlarge instances with EFA-enabled interfaces, using an `EFA-enabled security group <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa-start-nccl-base.html#nccl-start-base-setup>`__, and set up using :ref:`Install PyTorch Neuron on Trn1 <pytorch-neuronx-install>`, you can run
multi-instance BERT-Large pretraining. The following example demonstrates running BERT phase 1 pretraining on two instances.
To ensure that the global batch size remains 16384 for phase 1, the gradient accumulation microstep count is halved when the number of instances is 2; the arithmetic is shown below.
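The global batch size is the product of the per-worker batch size, the gradient accumulation microsteps, and the total number of workers, so with two nodes:

::

    16 (batch size) x 16 (grad accum microsteps) x 64 (workers) = 16384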
NOTE: To run on multiple instances, you will need to use trn1.32xlarge instances and use all 32 NeuronCores on each instance.
On the rank-0 Trn1 host (root), run with ``--node_rank=0`` using torchrun utility, and ``--master_addr`` set to rank-0 host's IP address:
.. code:: shell
cd ~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain
export FI_EFA_USE_DEVICE_RDMA=1
export FI_PROVIDER=efa
export BUCKET_CAP_MB=512
export XLA_TRANSFER_SEED_ASYNC=1
XLA_DOWNCAST_BF16=1 torchrun --nproc_per_node=32 --nnodes=2 --node_rank=0 --master_addr=<root IP> --master_port=2020 \
dp_bert_large_hf_pretrain_hdf5.py \
--batch_size 16 \
--grad_accum_usteps 16 |& tee run_pretrain_log.txt
On another Trn1 host, run with ``--node_rank=1``, and ``--master_addr`` also set to rank-0 host's IP address:
.. code:: shell
cd ~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain
export FI_EFA_USE_DEVICE_RDMA=1
export FI_PROVIDER=efa
export BUCKET_CAP_MB=512
export XLA_TRANSFER_SEED_ASYNC=1
XLA_DOWNCAST_BF16=1 torchrun --nproc_per_node=32 --nnodes=2 --node_rank=1 --master_addr=<root IP> --master_port=2020 \
dp_bert_large_hf_pretrain_hdf5.py \
--batch_size 16 \
--grad_accum_usteps 16 |& tee run_pretrain_log.txt
It is important to launch the rank-0 worker with ``--node_rank=0`` to avoid a hang.
To train on multiple instances, it is recommended to use a ParallelCluster. For a ParallelCluster example, please see `Train a model on AWS Trn1 ParallelCluster <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples>`__.
Phase 2 BERT-Large pretraining
------------------------------
As mentioned above, BERT pretraining happens in two
phases. In phase 1, the sequence length is 128.
In phase 2, the sequence length increases to 512.
This additional training phase further reduces the pretraining
loss and improves the metrics for the fine-tuning tasks that usually
follow. The setup is very similar to phase 1, with some differences
in the training environment and command-line arguments highlighted below.
Training Environment
~~~~~~~~~~~~~~~~~~~~
The following dataset and checkpoint are required:
* ``~/examples_datasets/bert_pretrain_wikicorpus_tokenized_hdf5_seqlen512`` is WikiCorpus training dataset that is preprocessed (tokenized and pre-masked) for phase 2.
* ``~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain/output/ckpt_<phase1_end_step>.pt`` is the final checkpoint from phase 1. It’s generated automatically at the end of phase 1 pretraining. For convenience, one can also download the example available at ``s3://neuron-s3/training_checkpoints/pytorch/dp_bert_large_hf_pretrain/ckpt_28125.pt``, which was collected after 28125 training steps in phase 1. Phase 2 will continue training by loading this checkpoint. As it progresses, phase 2 continues to generate its own checkpoints in the output directory, following the naming convention ``ckpt_<global_steps>.pt``.
Initiating a Training Job
~~~~~~~~~~~~~~~~~~~~~~~~~
To launch the phase 2 pretraining job with the AdamW optimizer, run the same Python script ``dp_bert_large_hf_pretrain_hdf5.py``
as before, except with different options for phase 2. Again, we use BFloat16 casting and stochastic rounding
by setting the environment variable ``XLA_DOWNCAST_BF16=1``. For phase 2, we use a global batch size of 32768, with a per-worker device batch size of 2
and gradient accumulation microsteps of 512. The pretokenized dataset is expected to be at ``~/examples_datasets/bert_pretrain_wikicorpus_tokenized_hdf5_seqlen512/`` following the setup steps above and is set via the ``--data_dir`` option.
.. code:: shell
cd ~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain
XLA_DOWNCAST_BF16=1 torchrun --nproc_per_node=32 dp_bert_large_hf_pretrain_hdf5.py \
--data_dir ~/examples_datasets/bert_pretrain_wikicorpus_tokenized_hdf5_seqlen512/ \
--lr 2.8e-4 \
--phase2 \
--resume_ckpt \
--phase1_end_step 28125 \
--batch_size 2 \
--grad_accum_usteps 512 \
--seq_len 512 \
--max_pred_len 80 \
--warmup_steps 781 \
--max_steps 1563 \
|& tee run_pretrain_log_phase2.txt
The script ``run_dp_bert_large_hf_pretrain_bf16_s128.sh`` is provided in the same BERT tutorial directory for convenience and you can simply run the script to start the training with AdamW optimizer. Similarly, you can use LAMB optimizer using the script ``run_dp_bert_large_hf_pretrain_bf16_s512_lamb_phase2.sh``.
The output below is expected as the job is initiated. Step 28125 is the ``phase1_end_step`` in this run; it could be different if phase 1 training stopped at a different global step.
.. code:: shell
Worker 21 resuming from checkpoint ./output/ckpt_28125.pt at step 28125
Worker 23 resuming from checkpoint ./output/ckpt_28125.pt at step 28125
Worker 27 resuming from checkpoint ./output/ckpt_28125.pt at step 28125
Worker 26 resuming from checkpoint ./output/ckpt_28125.pt at step 28125
Worker 20 resuming from checkpoint ./output/ckpt_28125.pt at step 28125
Worker 22 resuming from checkpoint ./output/ckpt_28125.pt at step 28125
--------TRAINING CONFIG----------
Namespace(batch_size=2, data_dir='/home/ec2-user/examples_datasets/
bert_pretrain_wikicorpus_tokenized_hdf5_seqlen512/', debug=False,
enable_pt_autocast=False, grad_accum_usteps=512, local_rank=0, lr=0.0002,
max_pred_len=80, max_steps=28125, metrics_file='/tmp/test_dict.json',
minimal_ckpt=False, num_ckpts_to_keep=1, output_dir='./output',
phase1_end_step=28125, phase2=True, resume_ckpt=True, resume_step=-1,
seed=12349, seq_len=512, shards_per_ckpt=1, steps_this_run=32, warmup_steps=781)
--------MODEL CONFIG----------
BertConfig {
"_name_or_path": "bert-large-uncased",
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 1024,
"initializer_range": 0.02,
"intermediate_size": 4096,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 16,
"num_hidden_layers": 24,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.15.0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
As the phase 2 training proceeds, similar metrics to phase 1 will appear on the console, showing the loss, learning rate, and throughput:
.. code:: shell
LOG Tue Sep 27 20:56:35 2022 - (0, 26) step_loss : 4.3438 learning_rate : 6.66e-06 throughput : 494.55
LOG Tue Sep 27 20:57:40 2022 - (0, 27) step_loss : 4.0938 learning_rate : 6.91e-06 throughput : 495.67
LOG Tue Sep 27 20:58:46 2022 - (0, 28) step_loss : 4.1875 learning_rate : 7.17e-06 throughput : 496.18
LOG Tue Sep 27 20:59:53 2022 - (0, 29) step_loss : 4.0000 learning_rate : 7.43e-06 throughput : 495.31
LOG Tue Sep 27 21:00:58 2022 - (0, 30) step_loss : 4.2500 learning_rate : 7.68e-06 throughput : 495.60
LOG Tue Sep 27 21:02:05 2022 - (0, 31) step_loss : 4.3125 learning_rate : 7.94e-06 throughput : 495.50
LOG Tue Sep 27 21:03:10 2022 - (0, 32) step_loss : 4.4688 learning_rate : 8.19e-06 throughput : 496.02
Tools
-----
While running the tutorial, try experimenting with the following Neuron
tools, which help monitor and evaluate compute utilization in real-time:
neuron-ls
~~~~~~~~~
The ``neuron-ls`` command describes the number of Neuron devices present
in the system, along with the associated NeuronCore count, memory, and
PCI device information:
.. image:: neuron-ls.png
:alt: Image: image.png
You will find that the Trn1 instance has 16 Neuron devices, each with 2
NeuronCores. This configuration allows you to train the model using a
total of 32 workers, one per NeuronCore, within a single instance.
Additional information regarding neuron-ls can be found in the
`neuron-ls user
guide <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-tools/neuron-ls.html>`__.
neuron-top
~~~~~~~~~~
The ``neuron-top`` command presents a high-level view of the Neuron
environment, including the utilization of each of the NeuronCores, any
models that are currently loaded onto one or more NeuronCores, process
IDs for any processes that are leveraging the Neuron runtime, and basic
system statistics relating to vCPU and memory usage.
Please note that ``neuron-top`` can either display aggregate NeuronCore
utilization for 'all' processes (the default), or alternatively display
the NeuronCore utilization for a particular process. You can toggle
through the aggregate and per-process views using the ``a`` and ``d``
keys. The screenshot below illustrates the default aggregate view:
.. image:: neuron-top.png
:alt: Image: image.png
Please refer to the `neuron-top user
guide <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-tools/neuron-top-user-guide.html>`__
for additional details.
Generating tokenized and sharded dataset files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This section describes how to generate the tokenized and sharded dataset files from the WikiCorpus dataset. If you just want the pregenerated dataset files, please see the ``Downloading tokenized and sharded dataset files`` section above.
On a c5n.18xlarge instance launched with the Deep Learning Conda AMI and 512GB disk space, you can generate the preprocessed datasets from the WikiCorpus dataset using NVIDIA's DeepLearningExamples for BERT pretraining. The preprocessing converts the WikiCorpus dataset to tokenized data and shards it into multiple files for parallel loading. The full flow takes about 8.7 hours:
.. code:: shell
source activate pytorch_latest_p37
cd ~/
git clone https://github.com/NVIDIA/DeepLearningExamples.git
cd DeepLearningExamples
git checkout 81b9010096b6f9812e3977b607669f6ec8b16561
sudo mkdir -m a=rwx /workspace
cp -rf PyTorch/LanguageModeling/BERT /workspace/bert
cd /workspace
git clone https://github.com/attardi/wikiextractor.git
cd wikiextractor
git checkout 6408a430fc504a38b04d37ce5e7fc740191dee16
cd /workspace/bert
# increase num processes and shards
ex -s "+%s/\(bertPrep\.py\)\( --action create_hdf5_files\)/\1 --n_processes 32 --n_test_shards 1024 --n_training_shards 1024\2" "+wq" data/create_datasets_from_start.sh
export BERT_PREP_WORKING_DIR=/workspace/data/
time ./data/create_datasets_from_start.sh wiki_only |& tee log
After execution is finished, phase 1 pre-tokenized and sharded dataset is located at:
``/workspace/data/hdf5_lower_case_1_seq_len_128_max_pred_20_masked_lm_prob_0.15_random_seed_12345_dupe_factor_5/wikicorpus_en/``
Copy this entire directory to ``~/examples_datasets/bert_pretrain_wikicorpus_tokenized_hdf5_seqlen128`` of the trn1.32xlarge machine.
Phase 2 pre-tokenized dataset is located at:
``/workspace/data/hdf5_lower_case_1_seq_len_512_max_pred_80_masked_lm_prob_0.15_random_seed_12345_dupe_factor_5/wikicorpus_en/``
Copy this entire directory to ``~/examples_datasets/bert_pretrain_wikicorpus_tokenized_hdf5_seqlen512`` of the trn1.32xlarge machine.
Known issues and limitations
----------------------------
NaNs seen with transformers version >= 4.21.0 when running HF BERT fine-tuning or pretraining with XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When running the HuggingFace BERT (any size) fine-tuning tutorial or pretraining tutorial with transformers version >= 4.21.0 and using ``XLA_USE_BF16=1`` or ``XLA_DOWNCAST_BF16=1``, you will see NaNs in the loss immediately at the first step. More details on the issue can be found at `pytorch/xla#4152 <https://github.com/pytorch/xla/issues/4152>`_. The workaround is to use transformers 4.20.0 or earlier (the tutorials currently recommend version 4.15.0) or to add ``transformers.modeling_utils.get_parameter_dtype = lambda x: torch.bfloat16`` to the Python script, as shown below.
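Applied near the top of the training script, the workaround is a direct transcription of the line above:

.. code:: python

    import torch
    import transformers.modeling_utils

    # Work around NaN losses seen with transformers >= 4.21.0 under
    # XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1
    transformers.modeling_utils.get_parameter_dtype = lambda x: torch.bfloat16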
BERT-large compilation limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Optimal BERT-large phase 1 (sequence length 128) batch size is currently 8 for FP32 and 16 for full BF16 with stochastic rounding.
Optimal BERT-large phase 2 (sequence length 512) batch size is currently 1 for FP32 and 2 for full BF16 with stochastic rounding.
BERT-large pretraining with pretokenized dataset hangs when using xm.save
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Currently, BERT-large pretraining with pretokenized dataset hangs when
``xm.save`` is used outside of the main training loop.
.. code:: python
Loop through HDF5 sharded dataset files:
    Train on one HDF5 sharded dataset file:
        Loop through batched samples:
            Training iteration
    Save checkpoint using xm.save
The reason is that ``xm.save`` has a synchronization point. However, the
HDF5 sharded data files do not all have the same number of training samples,
so the workers cannot all reach ``xm.save`` in the same iteration.
The workaround is to use ``xm._maybe_convert_to_cpu`` to ensure tensors
are moved to CPU, followed by ``torch.save``, as done in the BERT-large
pretraining tutorial:
.. code:: python
cpu_data = xm._maybe_convert_to_cpu(data)
torch.save(cpu_data, path)  # 'path' is a placeholder for your checkpoint file name
BERT-large two worker pretraining hangs or run out of host memory during checkpointing on trn1.2xlarge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On trn1.2xlarge, where host memory and CPU resources are limited,
the BERT-large two-worker pretraining may hang or run out of host memory during
checkpointing. This problem can be worked around by not saving the optimizer and
LR scheduler states in the checkpoint. This is enabled by the :option:`--minimal_ckpt` option
of the pretraining script, as sketched below.
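For example, a two-worker invocation on trn1.2xlarge might look like the following sketch (the batch size and gradient accumulation values here are illustrative):

.. code:: bash

    cd ~/aws-neuron-samples/torch-neuronx/training/dp_bert_hf_pretrain
    XLA_DOWNCAST_BF16=1 torchrun --nproc_per_node=2 \
        dp_bert_large_hf_pretrain_hdf5.py \
        --minimal_ckpt \
        --batch_size 16 \
        --grad_accum_usteps 512 |& tee run_pretrain_log.txt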
BERT precompilation using neuron_parallel_compile hangs when using torchrun
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We use ``neuron_parallel_compile`` in front of a short run command to do precompilation. However, the following command hangs when running BERT parallel compilation with torchrun:
.. code:: bash
neuron_parallel_compile XLA_DOWNCAST_BF16=1 torchrun --nproc_per_node=32 --nnodes=1 dp_bert_large_hf_pretrain_hdf5.py --steps_this_run 5
...
Updating train metrics in provide results.json file
Current data: {'num_workers': 32, 'epoch': 0, 'steps': 5, 'microsteps': 320, 'loss': -22172234.0, 'train_time_minutes': 0.7424166639645894, 'throughput_average': 1839.0391805624324, 'throughput_peak': 1840.0107059878164, 'batch_size': 8, 'max_length': 128}
Updating with data: {'num_workers': 32, 'epoch': 0, 'steps': 5, 'microsteps': 320, 'loss': -22172234.0, 'train_time_minutes': 0.7826640844345093, 'throughput_average': 1744.4691285659471, 'throughput_peak': 1745.4964663587539, 'batch_size': 8, 'max_length': 128}
Checkpointing...
Checkpointing done...
(hangs)
The fix is to add ``xm.rendezvous`` at the end of training to ensure all workers sync up before exiting the script ``dp_bert_large_hf_pretrain_hdf5.py``:
.. code:: python
def _mp_fn(index, flags):
torch.set_default_tensor_type('torch.FloatTensor')
train_bert_hdf5(flags)
xm.rendezvous("_mp_fn finished")
Reduced multi-node performance with Neuron PyTorch 1.12 (release 2.6)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Default BERT performance on multiple instances is reduced with Neuron PyTorch 1.12 (release 2.6). The workaround is to set the XLA flag ``XLA_TRANSFER_SEED_ASYNC=1``.
Troubleshooting
---------------
The following are troubleshooting tips related to this tutorial. See
:ref:`PyTorch Neuron on Trainium Troubleshooting
Guide <pytorch-neuron-traning-troubleshooting>` for additional troubleshooting
tips.
.. _modulenotfounderror-no-module-named-torch--torch_xla-transformers-etc:
ModuleNotFoundError: No module named 'torch' , 'torch_xla', 'transformers', etc
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you encounter 'ModuleNotFoundError' messages while attempting to run
the demo scripts, please ensure that you have activated the appropriate
Python *virtualenv* which contains all of the demo dependencies:
.. code:: bash
cd ~
source <python virtual environment path>/bin/activate
```
|
|
2023-09-29T20:54:47.751Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.rst.txt
|
```
Tutorials for Training (torch-neuronx)
=======================================
.. toctree::
:maxdepth: 1
:hidden:
/frameworks/torch/torch-neuronx/tutorials/training/bert
/frameworks/torch/torch-neuronx/tutorials/training/mlp
/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer
/frameworks/torch/torch-neuronx/tutorials/training/finetune_t5
/frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2
/frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training
/neuron-customops/tutorials/customop-mlp-training
/neuron-customops/tutorials/customop-mlp-perf-opt
.. include:: /frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.txt
```
|
|
2023-09-29T20:54:47.770Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/torch/torch-neuron/torch-neuron.rst.txt
|
```
.. _pytorch-neuron-rn:
PyTorch Neuron (``torch-neuron``) release notes
===============================================
.. contents:: Table of contents
:local:
:depth: 1
This document lists the release notes for the Pytorch-Neuron package.
Known Issues and Limitations - Updated 03/21/2023
-------------------------------------------------
Min & Max Accuracy
~~~~~~~~~~~~~~~~~~
The index outputs of the ``aten::argmin``, ``aten::argmax``, ``aten::min``, and
``aten::max`` operator implementations are sensitive to precision. For models
that contain these operators and have ``float32`` inputs, we recommend using the
``--fp32-cast=matmult --fast-math no-fast-relayout`` compiler option to avoid
numerical imprecision issues. Additionally, the ``aten::min`` and ``aten::max``
operator implementations do not currently support ``int64`` inputs when
``dim=0``. For more information on precision and performance-accuracy tuning,
see :ref:`neuron-cc-training-mixed-precision`.
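For illustration, these flags can be passed through the ``compiler_args``
argument of :func:`torch_neuron.trace`; a minimal hedged sketch, where ``model``
and ``example_inputs`` are placeholders:
.. code-block:: python

    import torch
    import torch_neuron

    # Forward the precision-related flags to the Neuron compiler at trace time
    neuron_model = torch_neuron.trace(
        model,
        example_inputs,
        compiler_args=['--fp32-cast=matmult', '--fast-math', 'no-fast-relayout'],
    )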
Python 3.5
~~~~~~~~~~
If you attempt to import ``torch.neuron`` from Python 3.5, you will see this error
in 1.1.7.0. Please use Python 3.6 or greater:
.. code-block::
File "/tmp/install_test_env/lib/python3.5/site-packages/torch_neuron/__init__.py", line 29
f'Invalid dependency version torch=={torch.__version__}. '
^
SyntaxError: invalid syntax
- Torchvision has dropped support for Python 3.5
- HuggingFace transformers has dropped support for Python 3.5
Torchvision
~~~~~~~~~~~
When versions of ``torchvision`` and ``torch`` are mismatched, this
can result in exceptions when compiling ``torchvision`` based
models. Specific versions of ``torchvision`` are built against each release
of ``torch``. For example:
- ``torch==1.5.1`` matches ``torchvision==0.6.1``
- ``torch==1.7.1`` matches ``torchvision==0.8.2``
- etc.
Simultaneously installing both ``torch-neuron`` and ``torchvision`` is the
recommended method of correctly resolving versions.
Dynamic Batching
~~~~~~~~~~~~~~~~
Dynamic batching does not work properly for some models that use the
``aten::size`` operator. When this issue occurs, the input batch sizes are not
properly recorded at inference time, resulting in an error such as:
.. code-block:: text
RuntimeError: The size of tensor a (X) must match the size of tensor b (Y) at non-singleton dimension 0.
This error typically occurs when ``aten::size`` operators are partitioned to
CPU. We are investigating a fix for this issue.
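For context, dynamic batching is requested at trace time; a minimal hedged
sketch (``model`` is a placeholder for an image-classification module):
.. code-block:: python

    import torch
    import torch_neuron

    # Trace with a fixed batch size, but allow other batch sizes at inference
    example = torch.rand(1, 3, 224, 224)
    neuron_model = torch_neuron.trace(model, example, dynamic_batch_size=True)

    # Larger batches are split into chunks of the traced batch size; models
    # that partition aten::size to CPU may hit the error described above
    output = neuron_model(torch.rand(8, 3, 224, 224))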
PyTorch Neuron release [package ver. 1.*.*.2.9.1.0, SDK ver. 2.13.0]
--------------------------------------------------------------------
Date: 8/28/2023
* Added support for clamp_min/clamp_max ATEN operators.
PyTorch Neuron release [package ver. 1.*.*.2.8.9.0, SDK ver. 2.12.0]
--------------------------------------------------------------------
Date: 7/19/2023
* Minor updates.
PyTorch Neuron release [2.7.10.0]
--------------------------------------------------
Date: 6/14/2023
New in this release
~~~~~~~~~~~~~~~~~~~
* Added support for Python 3.10
Bug fixes
~~~~~~~~~
* ``torch.pow`` operation now correctly handles mismatches between base and exponent data types
PyTorch Neuron release [2.7.1.0]
--------------------------------------------------
Date: 05/1/2023
* Minor updates.
PyTorch Neuron release [2.6.5.0]
--------------------------------------------------
Date: 03/28/2023
New in this release
~~~~~~~~~~~~~~~~~~~
* Added support for ``torch==1.13.1``
* New releases of ``torch-neuron`` no longer include versions for ``torch==1.7`` and ``torch==1.8``
* Added support for Neuron runtime 2.12
* Added support for new operators:
* ``aten::tensordot``
* ``aten::adaptive_avg_pool1d``
* ``aten::prelu``
* ``aten::reflection_pad2d``
* ``aten::baddbmm``
* ``aten::repeat``
* Added a ``separate_weights`` flag to :func:`torch_neuron.trace` to support
  models that are larger than 2GB (see the sketch after this list)
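A minimal hedged sketch of the new flag (``model`` and ``example_inputs`` are placeholders):
.. code-block:: python

    import torch
    import torch_neuron

    neuron_model = torch_neuron.trace(
        model,
        example_inputs,
        separate_weights=True,  # keep weights separate to support models larger than 2GB
    )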
Bug fixes
~~~~~~~~~
* Fixed ``aten::_convolution`` with grouping for:
* :class:`torch.nn.Conv1d`
* :class:`torch.nn.Conv3d`
* :class:`torch.nn.ConvTranspose2d`
* Fixed ``aten::linear`` to support 1d input tensors
* Fixed an issue where an input could not be directly returned from the network
PyTorch Neuron release [2.5.0.0]
--------------------------------------------------
Date: 11/23/2022
New in this release
~~~~~~~~~~~~~~~~~~~
* Added PyTorch 1.12 support
* Added Python 3.8 support
* Added new operators support. See :ref:`neuron-cc-ops-pytorch`
* Added support for ``aten::lstm``. See: :ref:`torch_neuron_lstm_support`
* Improved logging:
* Improved error messages for specific compilation failure modes, including out-of-memory errors
* Added a warning to show the code location of ``prim::PythonOp`` operations
* Removed overly-verbose tracing messages
* Added improved error messages for ``neuron-cc`` and ``tensorflow`` dependency issues
* Added more debug information when an invalid dynamic batching configuration is used
* Added new experimental explicit NeuronCore placement API. See: :ref:`torch_neuron_core_placement_api`
* Added new guide for NeuronCore placement. See: :ref:`torch_neuron_core_placement_guide`
* Improved :func:`torch_neuron.trace` performance when using large graphs
* Reduced host memory usage of loaded models in ``libtorchneuron.so``
* Added ``single_fusion_ratio_threshold`` argument to :func:`torch_neuron.trace`
to give more fine-grained control of partitioned graphs
Bug fixes
~~~~~~~~~
* Improved handling of tensor mutations, which previously caused accuracy issues on certain models (e.g. yolor, yolov5)
* Fixed an issue where ``inf`` and ``-inf`` values would cause unexpected ``NaN`` values. This could occur with newer versions of ``transformers``
* Fixed an issue where :func:`torch.neuron.DataParallel` would not fully utilize all NeuronCores for specific batch sizes
* Fixed and improved operators:
* ``aten::upsample_bilinear2d``: Improved error messages in cases where the operation cannot be supported
* ``aten::_convolution``: Added support for ``output_padding`` argument
* ``aten::div``: Added support for ``rounding_mode`` argument
* ``aten::sum``: Fixed to handle non-numeric data types
* ``aten::expand``: Fixed to handle scalar tensors
* ``aten::permute``: Fixed to handle negative indices
* ``aten::min``: Fixed to support more input types
* ``aten::max``: Fixed to support more input types
* ``aten::max_pool2d``: Fixed to support both 3-dimensional and 4-dimensional input tensors
* ``aten::Int``: Fixed an issue where long values would incorrectly lose precision
* ``aten::constant_pad_nd``: Fixed to correctly use non-0 padding values
* ``aten::pow``: Fixed to support more input types & values
* ``aten::avg_pool2d``: Added support for ``count_include_pad`` argument. Added support for ``ceil_mode`` argument if padding isn’t specified
* ``aten::zero``: Fixed to handle scalars correctly
* ``prim::Constant``: Fixed an issue where ``-inf`` was incorrectly handled
* Improved handling of scalars in arithmetic operators
PyTorch Neuron release [2.3.0.0]
--------------------------------------------------
Date: 04/29/2022
New in this release
~~~~~~~~~~~~~~~~~~~
* Added support for PyTorch 1.11.
* Updated PyTorch 1.10 to version 1.10.2.
* End of support for torch-neuron 1.5, see :ref:`eol-pt-15`.
* Added support for new operators:
* ``aten::masked_fill_``
* ``aten::new_zeros``
* ``aten::frobenius_norm``
Bug fixes
~~~~~~~~~
* Improved ``aten::gelu`` accuracy
* Updated ``aten::meshgrid`` to support optional indexing argument introduced in ``torch 1.10`` , see `PyTorch issue 50276 <https://github.com/pytorch/pytorch/issues/50276>`_
PyTorch Neuron release [2.2.0.0]
--------------------------------------------------
Date: 03/25/2022
New in this release
~~~~~~~~~~~~~~~~~~~
* Added full support for ``aten::max_pool2d_with_indices`` (was previously supported only when indices were unused).
* Added new torch-neuron packages compiled with ``-D_GLIBCXX_USE_CXX11_ABI=1``; the new packages support PyTorch 1.8, PyTorch 1.9, and PyTorch 1.10.
  To install the additional packages compiled with ``-D_GLIBCXX_USE_CXX11_ABI=1``, please change the package repo index to ``https://pip.repos.neuron.amazonaws.com/cxx11/``
PyTorch Neuron release [2.1.7.0]
--------------------------------------------------
Date: 01/20/2022
New in this release
~~~~~~~~~~~~~~~~~~~
* Added PyTorch 1.10 support
* Added new operators support, see :ref:`neuron-cc-ops-pytorch`
* Updated ``aten::_convolution`` to support 2d group convolution
* Updated ``neuron::forward`` operators to allocate less dynamic memory. This can increase performance on models with many input & output tensors.
* Updated ``neuron::forward`` to better handle batch sizes when ``dynamic_batch_size=True``. This can increase performance at
inference time when the input batch size is exactly equal to the traced model batch size.
Bug fixes
~~~~~~~~~
* Added the ability to ``torch.jit.trace`` a ``torch.nn.Module`` where a submodule has already been traced with :func:`torch_neuron.trace` on a CPU-type instance.
Previously, if this had been executed on a CPU-type instance, an initialization exception would have been thrown.
* Fixed ``aten::matmul`` behavior on 1-dimensional by n-dimensional multiplies. Previously, this would cause a validation error.
* Fixed binary operator type promotion. Previously, in unusual situations, operators like ``aten::mul`` could produce incorrect results due to invalid casting.
* Fixed ``aten::select`` when index was -1. Previously, this would cause a validation error.
* Fixed ``aten::adaptive_avg_pool2d`` padding and striding behavior. Previously, this could generate incorrect results with specific configurations.
* Fixed an issue where dictionary inputs could be incorrectly traced when the tensor values had gradients.
PyTorch Neuron release [2.0.536.0]
--------------------------------------------------
Date: 01/05/2022
New in this release
~~~~~~~~~~~~~~~~~~~
* Added new operator support for specific variants of operations (See :ref:`neuron-cc-ops-pytorch`)
* Added optional ``optimizations`` keyword to :func:`torch_neuron.trace` which accepts a list of :class:`~torch_neuron.Optimization` passes.
PyTorch Neuron release [2.0.468.0]
--------------------------------------------------
Date: 12/15/2021
New in this release
~~~~~~~~~~~~~~~~~~~
* Added support for ``aten::cumsum`` operation.
* Fixed ``aten::expand`` to correctly handle adding new dimensions.
PyTorch Neuron release [2.0.392.0]
--------------------------------------------------
Date: 11/05/2021
* Updated Neuron Runtime (which is integrated within this package) to ``libnrt 2.2.18.0`` to fix a container issue that was preventing
the use of containers when /dev/neuron0 was not present. See details here :ref:`neuron-runtime-release-notes`.
PyTorch Neuron release [2.0.318.0]
--------------------------------------------------
Date: 10/27/2021
New in this release
~~~~~~~~~~~~~~~~~~~
- PyTorch Neuron 1.x now supports Neuron Runtime 2.x (``libnrt.so`` shared library) only.
.. important::
- You must update to the latest Neuron Driver (``aws-neuron-dkms`` version 2.1 or newer)
for proper functionality of the new runtime library.
- Read :ref:`introduce-libnrt`
application note that describes :ref:`why are we making this
change <introduce-libnrt-why>` and
how :ref:`this change will affect the Neuron
SDK <introduce-libnrt-how-sdk>` in detail.
- Read :ref:`neuron-migrating-apps-neuron-to-libnrt` for detailed information of how to
migrate your application.
- Introducing PyTorch 1.9.1 support (support for ``torch==1.9.1``)
- Added ``torch_neuron.DataParallel`` (a minimal usage sketch follows this list), see ResNet-50 tutorial :ref:`[html] </src/examples/pytorch/resnet50.ipynb>` and
  :ref:`torch-neuron-dataparallel-app-note` application note.
- Added support for tracing on GPUs
- Added support for ``ConvTranspose1d``
- Added support for new operators:
- ``aten::empty_like``
- ``aten::log``
- ``aten::type_as``
- ``aten::movedim``
- ``aten::einsum``
- ``aten::argmax``
- ``aten::min``
- ``aten::argmin``
- ``aten::abs``
- ``aten::cos``
- ``aten::sin``
- ``aten::linear``
- ``aten::pixel_shuffle``
- ``aten::group_norm``
- ``aten::_weight_norm``
- Added ``torch_neuron.is_available()``
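A minimal hedged usage sketch of ``torch_neuron.DataParallel`` (the model file
name and input shape are placeholders):
.. code-block:: python

    import torch
    import torch_neuron

    # Load a previously compiled Neuron model and replicate it across the
    # available NeuronCores
    model_neuron = torch.jit.load('model_neuron.pt')
    model_parallel = torch.neuron.DataParallel(model_neuron)

    # The input batch is split across the replicas automatically
    output = model_parallel(torch.rand(32, 3, 224, 224))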
Resolved Issues
~~~~~~~~~~~~~~~
- Fixed a performance issue when using both the
``dynamic_batch_size=True`` trace option and
``--neuron-core-pipeline`` compiler option. Dynamic batching now uses
``OpenMP`` to execute pipeline batches concurrently.
- Fixed ``torch_neuron.trace`` issues:
- Fixed a failure when the same submodule was traced with multiple
inputs
- Fixed a failure where some operations would fail to be called with
the correct arguments
- Fixed a failure where custom operators (torch plugins) would cause
a trace failure
- Fixed variants of ``aten::upsample_bilinear2d`` when
``scale_factor=1``
- Fixed variants of ``aten::expand`` using ``dim=-1``
- Fixed variants of ``aten::stack`` using multiple different input data
types
- Fixed variants of ``aten::max`` using indices outputs
[1.8.1.1.5.21.0]
--------------------------------------------------
Date: 08/12/2021
Summary
~~~~~~~
- Minor updates.
.. _neuron-torch-1570:
[1.8.1.1.5.7.0]
--------------------------------------------------
Date: 07/02/2021
Summary
~~~~~~~
- Added support for dictionary outputs using the ``strict=False`` flag (see the sketch after this list). See
  :ref:`/neuron-guide/neuron-frameworks/pytorch-neuron/troubleshooting-guide.rst`.
- Updated ``aten::batch_norm`` to correctly implement the ``affine`` flag.
- Added support for ``aten::erf`` and ``prim::DictConstruct``. See
:ref:`neuron-cc-ops-pytorch`.
- Added dynamic batch support. See
:ref:`/neuron-guide/neuron-frameworks/pytorch-neuron/api-compilation-python-api.rst`.
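As a hedged illustration of the ``strict=False`` flag (``model`` and
``example_inputs`` are placeholder names):
.. code-block:: python

    import torch
    import torch_neuron

    # strict=False relaxes tracing so the compiled module may return
    # dictionary outputs instead of plain tensors or tuples
    neuron_model = torch_neuron.trace(model, example_inputs, strict=False)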
.. _neuron-torch-1410:
[1.8.1.1.4.1.0]
--------------------------------------------------
Date: 5/28/2021
Summary
~~~~~~~~
* Added support for PyTorch 1.8.1
* Models compatibility
* Models compiled with previous versions of PyTorch Neuron (<1.8.1) are compatible with PyTorch Neuron 1.8.1.
* Models compiled with PyTorch Neuron 1.8.1 are not backward compatible with previous versions of PyTorch Neuron (<1.8.1).
* Updated tutorials to use Hugging Face Transformers 4.6.0.
* Added a new set of forward operators (forward_v2)
* Host memory allocation when loading the same model on multiple NeuronCores is significantly reduced
* Fixed an issue where models would not deallocate all memory within a Python session after being garbage collected.
* Fixed a TorchScript/C++ issue where loading the same model multiple times would not use multiple NeuronCores by default.
* Fixed logging to no longer configure the root logger.
* Removed informative messages that were produced during compilations as warnings. The number of warnings reduced significantly.
* Convolution operator support has been extended to include ConvTranspose2d variants.
* Reduce the amount of host memory usage during inference.
.. _neuron-torch-1350:
[1.7.1.1.3.5.0]
--------------------------------------------------
Date: 4/30/2021
Summary
~~~~~~~
- ResNext models now functional with new operator support
- Yolov5 support: refer to https://github.com/aws/aws-neuron-sdk/issues/253 and note https://github.com/ultralytics/yolov5/pull/2953, which optimized YoloV5 for AWS Neuron
- Convolution operator support has been extended to include most Conv1d and Conv3d variants
- New operator support. Please see :ref:`neuron-cc-ops-pytorch` for the complete list of operators.
.. _neuron-torch-12160:
[1.7.1.1.2.16.0]
--------------------------------------------------
Date: 3/4/2021
Summary
~~~~~~~~
- Minor enhancements.
.. _neuron-torch-12150:
[1.7.1.1.2.15.0]
--------------------------------------------------
Date: 2/24/2021
Summary
~~~~~~~
- Fix for CVE-2021-3177.
.. _neuron-torch-1230:
[1.7.1.1.2.3.0]
--------------------------------------------------
Date: 1/30/2021
Summary
~~~~~~~~
- Made changes to allow models with -inf scalar constants to correctly compile
- Added new operator support. Please see :ref:`neuron-cc-ops-pytorch` for the complete list of operators.
.. _neuron-torch-11170:
[1.1.7.0]
--------------------------------------------------
Date: 12/23/2020
Summary
~~~~~~~~
- We are dropping support for Python 3.5 in this release
- torch.neuron.trace will now throw a RuntimeError in the case that no operators are compiled for neuron hardware
- torch.neuron.trace will now display compilation progress indicators (dots) as default behavior (neuron-cc must be updated to the December release or greater to see this feature)
- Added new operator support. Please see :ref:`neuron-cc-ops-pytorch` for the complete list of operators.
- Extended the BERT pretrained tutorial to demonstrate execution on multiple cores and batch modification, and updated the tutorial to accommodate changes in the Hugging Face Transformers code for version 4.0
- Added a tutorial for torch-serve which extends the BERT tutorial
- Added support for PyTorch 1.7
.. _neuron-torch-1019780:
[1.0.1978.0]
--------------------------------------------------
Date: 11/17/2020
Summary
~~~~~~~
- Fixed bugs in comparison operators, and added remaining variants
  (eq, ne, gt, ge, lt, le)
- Added support for prim::PythonOp - note that this must be run on CPU
and not Neuron. We recommend you replace this code with PyTorch
operators if possible
- Support for a series of new operators. Please see :ref:`neuron-cc-ops-pytorch` for the
complete list of operators.
- Performance improvements to the runtime library
- Correction of a runtime library bug which caused models with large
tensors to generate incorrect results in some cases
.. _neuron-torch-1017210:
[1.0.1721.0]
--------------------------------------------------
Date: 09/22/2020
Summary
~~~~~~~
- Various minor improvements to the Pytorch autopartitioner feature
- Support for the operators aten::constant_pad_nd, aten::meshgrid
- Improved performance on various torchvision models. Of note are
resnet50 and vgg16
.. _neuron-torch-1015320:
[1.0.1532.0]
--------------------------------------------------
Date: 08/08/2020
.. _summary-1:
Summary
~~~~~~~
- Various minor improvements to the Pytorch autopartitioner feature
- Support for the ``aten::ones`` operator
.. _neuron-torch-1015220:
[1.0.1522.0]
--------------------------------------------------
Date: 08/05/2020
.. _summary-2:
Summary
~~~~~~~~
Various minor improvements.
.. _neuron-torch-1013860:
[1.0.1386.0]
--------------------------------------------------
Date: 07/16/2020
.. _summary-3:
Summary
~~~~~~~
This release adds auto-partitioning, model analysis and PyTorch 1.5.1
support, along with a number of new operators
Major New Features
~~~~~~~~~~~~~~~~~~
- Support for PyTorch 1.5.1
- Introduced an automated operator device placement mechanism in
  torch.neuron.trace that runs sub-graphs containing operators
  unsupported by the Neuron compiler in native PyTorch. This new
  mechanism is on by default and can be turned off by passing the
  argument fallback=False to the compiler arguments (see the sketch
  after this list).
- Model analysis to find supported and unsupported operators in a model
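A minimal sketch of controlling this behavior, assuming placeholder
``model`` and ``example_inputs`` objects:

.. code-block:: python

    import torch
    import torch.neuron

    # Default: sub-graphs with unsupported operators fall back to CPU.
    neuron_model = torch.neuron.trace(model, example_inputs)

    # Disable the automatic fallback; unsupported operators will instead
    # cause compilation to fail rather than run on CPU.
    neuron_model_strict = torch.neuron.trace(model, example_inputs, fallback=False)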
Resolved Issues
~~~~~~~~~~~~~~~~
.. _neuron-torch-1011680:
[1.0.1168.0]
--------------------------------------------------
Date: 6/11/2020
.. _summary-4:
Summary
~~~~~~~
.. _major-new-features-1:
Major New Features
~~~~~~~~~~~~~~~~~~
.. _resolved-issues-1:
Resolved Issues
~~~~~~~~~~~~~~~
Known Issues and Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. _neuron-torch-1010010:
[1.0.1001.0]
--------------------------------------------------
Date: 5/11/2020
.. _summary-5:
Summary
~~~~~~~~
Additional PyTorch operator support and improved support for model
saving and reloading.
.. _major-new-features-2:
Major New Features
~~~~~~~~~~~~~~~~~~
- Added Neuron Compiler support for a number of previously unsupported
  PyTorch operators. Please see :ref:`neuron-cc-ops-pytorch` for the
  complete list of operators.
- Added support for torch.neuron.trace on models which have previously
  been saved using torch.jit.save and then reloaded, as sketched below.
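A minimal sketch of this save/reload flow, assuming a previously traced
``traced_model`` and matching ``example_inputs`` (illustrative names):

.. code-block:: python

    import torch
    import torch.neuron

    torch.jit.save(traced_model, 'model.pt')
    reloaded_model = torch.jit.load('model.pt')
    neuron_model = torch.neuron.trace(reloaded_model, example_inputs)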
.. _resolved-issues-2:
Resolved Issues
~~~~~~~~~~~~~~~~
.. _known-issues-and-limitations-1:
Known Issues and Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. _neuron-torch-108250:
[1.0.825.0]
--------------------------------------------------
Date: 3/26/2020
.. _summary-6:
Summary
~~~~~~~
.. _major-new-features-3:
Major New Features
~~~~~~~~~~~~~~~~~~
.. _resolved-issues-3:
Resolved Issues
~~~~~~~~~~~~~~~
.. _known-issues-and-limitations-2:
Known Issues and limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. _neuron-torch-107630:
[1.0.763.0]
--------------------------------------------------
Date: 2/27/2020
.. _summary-7:
Summary
~~~~~~~
Added Neuron Compiler support for a number of previously unsupported
PyTorch operators. Please see :ref:`neuron-cc-ops-pytorch` for the complete
list of operators.
.. _major-new-features-4:
Major new features
~~~~~~~~~~~~~~~~~~
- None
.. _resolved-issues-4:
Resolved issues
~~~~~~~~~~~~~~~~~
- None
.. _neuron-torch-106720:
[1.0.672.0]
--------------------------------------------------
Date: 1/27/2020
.. _summary-8:
Summary
~~~~~~~~
.. _major-new-features-5:
Major new features
~~~~~~~~~~~~~~~~~~
.. _resolved-issues-5:
Resolved issues
~~~~~~~~~~~~~~~~
- Python 3.5 and Python 3.7 are now supported.
.. _known-issues-and-limitations-3:
Known issues and limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Other Notes
~~~~~~~~~~~
.. _neuron-torch-106270:
[1.0.627.0]
--------------------------------------------------
Date: 12/20/2019
.. _summary-9:
Summary
~~~~~~~~
This is the initial release of torch-neuron. It is not distributed on
the DLAMI yet and needs to be installed from the Neuron pip repository.
Note that we are currently using TensorFlow as an intermediate format
to pass to our compiler. This does not affect any runtime execution from
PyTorch to Neuron Runtime and Inferentia. This is why the neuron-cc
installation must include [tensorflow] for PyTorch.
.. _major-new-features-6:
Major new features
~~~~~~~~~~~~~~~~~~
.. _resolved-issues-6:
Resolved issues
~~~~~~~~~~~~~~~
.. _known-issues-and-limitations-4:
Known issues and limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Models TESTED
~~~~~~~~~~~~~~
The following models have successfully run on neuron-inferentia systems:
1. SqueezeNet
2. ResNet50
3. Wide ResNet50
Pytorch Serving
~~~~~~~~~~~~~~~
In this initial version there is no specific serving support. Inference
works correctly through Python on Inf1 instances using the Neuron
runtime. Future releases will include support for production deployment
and serving of models.
Profiler support
~~~~~~~~~~~~~~~~
Profiler support is not provided in this initial release and will be
available in future releases.
Automated partitioning
~~~~~~~~~~~~~~~~~~~~~~
Automatic partitioning of graphs into supported and non-supported
operations is not currently supported. A tutorial is available to
provide guidance on how to manually partition a model graph. Please see
:ref:`pytorch-manual-partitioning-jn-tutorial`.
PyTorch dependency
~~~~~~~~~~~~~~~~~~
Currently PyTorch support depends on a Neuron-specific version of
PyTorch v1.3.1. Future revisions will add support for 1.4 and later
releases.
Trace behavior
~~~~~~~~~~~~~~
In order to trace a model it must be in evaluation mode. For examples,
please see :ref:`/src/examples/pytorch/resnet50.ipynb`.
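A minimal sketch, assuming placeholder ``model`` and ``example_inputs``
objects:

.. code-block:: python

    import torch
    import torch.neuron

    model.eval()  # tracing requires evaluation mode
    neuron_model = torch.neuron.trace(model, example_inputs)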
Six pip package is required
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Six package is required for the torch-neuron runtime, but it is not
modeled in the package dependencies. This will be fixed in a future
release.
Multiple NeuronCore support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the num-neuroncores option is used, the number of cores must be
manually set in the calling shell environment variable for compilation
and inference.
For example: using the keyword argument
compiler_args=['--num-neuroncores', '4'] in the trace call requires
NEURONCORE_GROUP_SIZES=4 to be set in the environment at compile time
and runtime, as sketched below.
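A sketch of this configuration, assuming placeholder ``model`` and
``example_inputs`` objects:

.. code-block:: bash

    export NEURONCORE_GROUP_SIZES=4

.. code-block:: python

    # Compile for a 4-core pipeline; the same environment variable must
    # also be set when the model is later loaded for inference.
    neuron_model = torch.neuron.trace(
        model, example_inputs,
        compiler_args=['--num-neuroncores', '4'])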
CPU execution
~~~~~~~~~~~~~~
At compilation time a constant output is generated for the purposes of
tracing. Running inference on a non-Neuron instance will generate
incorrect results, so this must not be done. The following error message
is generated to stderr:
::

    Warning: Tensor output are ** NOT CALCULATED ** during CPU execution and only
    indicate tensor shape
.. _other-notes-1:
Other notes
~~~~~~~~~~~
- Python version(s) supported:
- 3.6
- Linux distribution supported:
- DLAMI Ubuntu 18 and Amazon Linux 2 (using Python 3.6 Conda environments)
- Other AMIs based on Ubuntu 18
- For Amazon Linux 2 please install Conda and use Python 3.6 Conda
environment
```
|
2023-09-29T20:54:47.805Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/tutorials/training/mlp.rst.txt
|
```
.. _neuronx-mlp-training-tutorial:
Multi-Layer Perceptron Training Tutorial
========================================
MNIST is a standard dataset for handwritten digit recognition. A
multi-layer perceptron (MLP) model can be trained with the MNIST dataset
to recognize hand-written digits. This tutorial starts with a 3-layer MLP
training example in PyTorch on CPU, then shows how to modify it to run on
Trainium using PyTorch Neuron. It also shows how to do multi-worker
data-parallel MLP training.
.. contents:: Table of Contents
:local:
:depth: 2
.. include:: ../note-performance.txt
Setup environment and download examples
---------------------------------------
Before running the tutorial please follow the installation instructions at:
:ref:`Install PyTorch Neuron on
Trn1 <setup-torch-neuronx>`
Please set the storage of the instance to *512GB* or more if you also want to run through the BERT pretraining and GPT pretraining tutorials.
For all the commands below, make sure you are in the virtual environment that you have created above before you run the commands:
.. code:: shell
source ~/aws_neuron_venv_pytorch/bin/activate
Install needed dependencies in your environment by running:
.. code:: bash
pip install pillow
The torchvision package is needed for the MNIST dataset and has already been installed as part of :ref:`Install PyTorch Neuron on Trn1 <pytorch-neuronx-install>`. Installing torchvision together with torch-neuronx ensures that a compatible version of torchvision is selected. For example, torchvision==0.12 is compatible with torch==1.11 and torchvision==0.13 is compatible with torch==1.12.
To download the MNIST MLP examples, do:
.. code:: bash
git clone https://github.com/aws-neuron/aws-neuron-samples.git
cd aws-neuron-samples/torch-neuronx/training/mnist_mlp
Multi-layer perceptron MNIST model
----------------------------------
In ``model.py``, we define the multi-layer perceptron (MLP) MNIST model with 3
linear layers and ReLU activations, followed by a log-softmax layer.
This model will be used in multiple example scripts.
Single-worker MLP training script in PyTorch on CPU
---------------------------------------------------
We will show how to modify a training script that runs on another platform so that it runs on Trainium.
We begin with a single-worker MLP training script for running on
the host CPUs of the Trainium instance. The training script imports the
MLP model from ``model.py``.
In this training script, we load the MNIST train dataset and, within the
``main()`` method, set the data loader to read batches of 32 training
examples and corresponding labels.
Next we instantiate the MLP model and move it to the device. We use
``device = 'cpu'`` to illustrate the use of device in PyTorch. On GPU
you would use ``device = 'cuda'`` instead.
We also instantiate the other two components of a neural network
trainer: stochastic-gradient-descent (SGD) optimizer and
negative-log-likelihood (NLL) loss function (also known as cross-entropy
loss).
After the optimizer and loss function, we create a training loop to iterate
over the training samples and labels, performing the following steps for
each batch in each iteration (the full loop is sketched after this list):

- Zero gradients using:

  .. code:: python

     optimizer.zero_grad()

- Move the training samples and labels to the device using the
  ``tensor.to()`` method.

- Perform the forward/prediction pass using:

  .. code:: python

     output = model(train_x)

- The prediction results are compared against the corresponding labels
  using the loss function to compute the loss:

  .. code:: python

     loss_fn(output, train_label)

- The loss is propagated back through the model using the chain rule to
  compute the weight gradients:

  .. code:: python

     loss.backward()

- The weights are updated with a change that is proportional to the
  computed weight gradients:

  .. code:: python

     optimizer.step()
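Putting these steps together, a minimal sketch of the resulting loop body,
assuming the ``train_loader``, ``model``, ``loss_fn``, and ``optimizer``
objects described above:

.. code:: python

   for train_x, train_label in train_loader:
       optimizer.zero_grad()
       train_x = train_x.to(device)
       train_label = train_label.to(device)
       output = model(train_x)
       loss = loss_fn(output, train_label)
       loss.backward()
       optimizer.step()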
At the end of training we compute the throughput, display the final loss
and save the checkpoint.
Expected CPU output:
.. code:: bash
----------Training ---------------
Train throughput (iter/sec): 286.96994718801335
Final loss is 0.1040
----------End Training ---------------
For a full tutorial on training in PyTorch, please see
https://pytorch.org/tutorials/beginner/introyt/trainingyt.html.
Thus far we have used PyTorch without Trainium. Next, we will show how
to change this script to run on Trainium.
Single-worker MLP training on Trainium
--------------------------------------
To run on Trainium, we first modify the CPU training script ``train_cpu.py``
to run with PyTorch Neuron (torch_xla), as described in the :ref:`PyTorch
Neuron for Trainium Getting Started Guide <pytorch-neuronx-programming-guide>`,
by changing the device:
.. code:: python
import torch_xla.core.xla_model as xm
device = xm.xla_device()
# or
device = 'xla'
When the model is moved to the XLA device using the ``model.to(device)``
method, subsequent operations on the model are recorded for later
execution. This is XLA's lazy execution, which is different from
PyTorch's eager execution. Within the training loop, we must mark the
graph to be optimized and run on the XLA device (NeuronCore) using
``xm.mark_step()`` (unless MpDeviceLoader is used, as you will see in the
next section). Without this mark, XLA cannot determine where the graph
ends. The collected computational graph also gets compiled and executed
when you request the value of a tensor, such as by calling
``loss.item()`` or ``print(loss)``.
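The change to the training loop itself is small; a minimal sketch, reusing
the loop structure from the CPU script:

.. code:: python

   import torch_xla.core.xla_model as xm

   device = xm.xla_device()
   model = model.to(device)
   for train_x, train_label in train_loader:
       optimizer.zero_grad()
       train_x = train_x.to(device)
       train_label = train_label.to(device)
       loss = loss_fn(model(train_x), train_label)
       loss.backward()
       optimizer.step()
       xm.mark_step()  # compile and execute the lazily recorded graph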
To save a checkpoint, it is recommended to use the ``xm.save()``
function instead of ``torch.save()`` to ensure states are moved to the CPU.
``xm.save()`` also prevents the "XRT memory handle not found" warning at
the end of the evaluation script (if a checkpoint saved using ``torch.save()``
is used for evaluation). For example (the checkpoint path below is
illustrative):
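.. code:: python

   import torch_xla.core.xla_model as xm

   # xm.save() moves states to CPU before serializing; the path is illustrative.
   xm.save(model.state_dict(), 'checkpoints/checkpoint.pt')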
The resulting script ``train.py`` can be executed as
``python3 train.py``. Again, note that we import the MLP model
from ``model.py``. When you examine the script, the comments that begin with
'XLA' indicate the changes required to make the script compatible with
torch_xla.
Expected output on trn1.32xlarge (starting from a fresh compilation cache, located at /var/tmp/neuron-compile-cache by default):
.. code:: bash
2022-04-12 16:15:00.000947: INFO ||NCC_WRAPPER||: No candidate found under /var/tmp/neuron-compile-cache/USER_neuroncc-1.0.47218.0+162039557/MODULE_18200615679846498221.
2022-04-12 16:15:00.000949: INFO ||NCC_WRAPPER||: Cache dir for the neff: /var/tmp/neuron-compile-cache/USER_neuroncc-1.0.47218.0+162039557/MODULE_18200615679846498221/MODULE_0_SyncTensorsGraph.318_18200615679846498221_ip-172-31-69-14.ec2.internal-8355221-28940-5dc775cd78aa2/83a0fd4a-b07e-4404-aa55-701ab3b2700c
........
Compiler status PASS
2022-04-12 16:18:05.000843: INFO ||NCC_WRAPPER||: Exiting with a successfully compiled graph
2022-04-12 16:18:05.000957: INFO ||NCC_WRAPPER||: No candidate found under /var/tmp/neuron-compile-cache/USER_neuroncc-1.0.47218.0+162039557/MODULE_5000680699473283909.
2022-04-12 16:18:05.000960: INFO ||NCC_WRAPPER||: Cache dir for the neff: /var/tmp/neuron-compile-cache/USER_neuroncc-1.0.47218.0+162039557/MODULE_5000680699473283909/MODULE_1_SyncTensorsGraph.390_5000680699473283909_ip-172-31-69-14.ec2.internal-8355221-28940-5dc7767e5fc69/7d0a2955-11b4-42e6-b536-6f0f02cc68df
.
Compiler status PASS
2022-04-12 16:18:12.000912: INFO ||NCC_WRAPPER||: Exiting with a successfully compiled graph
----------Training ---------------
Train throughput (iter/sec): 95.06756661972014
Final loss is 0.1979
----------End Training ---------------
If you re-run the training script a second time, you will see messages
indicating that the compiled graphs are cached in the persistent cache
from the previous run and that the startup time is quicker:
.. code:: bash
(aws_neuron_venv_pytorch_p36) [ec2-user@ip-172-31-69-14 mnist_mlp]$ python train.py |& tee log_trainium
2022-04-12 16:21:58.000241: INFO ||NCC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/USER_neuroncc-1.0.47218.0+162039557/MODULE_18200615679846498221/MODULE_0_SyncTensorsGraph.318_18200615679846498221_ip-172-31-69-14.ec2.internal-8355221-28940-5dc775cd78aa2/83a0fd4a-b07e-4404-aa55-701ab3b2700c/MODULE_0_SyncTensorsGraph.318_18200615679846498221_ip-172-31-69-14.ec2.internal-8355221-28940-5dc775cd78aa2.neff. Exiting with a successfully compiled graph
2022-04-12 16:21:58.000342: INFO ||NCC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/USER_neuroncc-1.0.47218.0+162039557/MODULE_5000680699473283909/MODULE_1_SyncTensorsGraph.390_5000680699473283909_ip-172-31-69-14.ec2.internal-8355221-28940-5dc7767e5fc69/7d0a2955-11b4-42e6-b536-6f0f02cc68df/MODULE_1_SyncTensorsGraph.390_5000680699473283909_ip-172-31-69-14.ec2.internal-8355221-28940-5dc7767e5fc69.neff. Exiting with a successfully compiled graph
----------Training ---------------
Train throughput (iter/sec): 93.16748895384832
Final loss is 0.1979
----------End Training ---------------
Multiple graphs can be created during execution since there are
differences between some iterations (first, steady state, last). After
the first iteration, the graph for each iteration should remain the same
from iteration to iteration. This allows the XLA runtime to execute a
previously compiled graph that has been cached in the XLA runtime cache.
If the inner training loop has some control flow, for example for
gradient accumulation, the number of compiled graphs may increase due to
the generation and consumption of intermediates, as well as additional
operations when the conditional path is taken.
Multi-worker data-parallel MLP training using torchrun
------------------------------------------------------
Data parallel training allows you to replicate your script across
multiple workers, each worker processing a proportional portion of the
dataset, in order to train faster.
The PyTorch distributed utility torchrun can be used to launch multiple
processes in a server node for multi-worker data parallel training.
To run multiple workers in a data-parallel configuration using torchrun,
modify the single-worker training script train.py as follows (below we use
``xm`` as an alias for ``torch_xla.core.xla_model`` and ``xmp`` as an alias
for ``torch_xla.distributed.xla_multiprocessing``):

1. Import the XLA backend for torch.distributed using
   ``import torch_xla.distributed.xla_backend``.
2. Use ``torch.distributed.init_process_group('xla')`` to initialize the
   PyTorch XLA runtime and the Neuron runtime.
3. Use the XLA multiprocessing device loader (``MpDeviceLoader``) from
   ``torch_xla.distributed.parallel_loader`` to wrap the PyTorch data loader.
4. Use ``xm.optimizer_step(optimizer)`` to perform the allreduce and take
   the optimizer step.
XLA MpDeviceLoader is optimized for XLA and is recommended for best
performance. It also takes care of marking the step for execution
(compiling and executing the lazily collected operations for an iteration),
so no separate ``xm.mark_step()`` is needed. A minimal sketch of these
changes follows.
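The sketch below assumes the ``model``, ``optimizer``, ``loss_fn``, and
``train_loader`` objects from the single-worker script:

.. code:: python

   import torch
   import torch_xla.core.xla_model as xm
   import torch_xla.distributed.parallel_loader as pl
   import torch_xla.distributed.xla_backend  # registers the 'xla' backend

   torch.distributed.init_process_group('xla')
   device = xm.xla_device()
   model = model.to(device)

   # MpDeviceLoader moves each batch to the device and marks the step.
   train_device_loader = pl.MpDeviceLoader(train_loader, device)
   for train_x, train_label in train_device_loader:
       optimizer.zero_grad()
       loss = loss_fn(model(train_x), train_label)
       loss.backward()
       xm.optimizer_step(optimizer)  # allreduce gradients, then optimizer step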
The following are general best-practice changes needed to scale up the
training (sketched after this list):

1. Set the random seed to be the same across workers.
2. Scale up the learning rate by the number of workers. Use
   ``xm.xrt_world_size()`` to get the global number of workers.
3. Add a distributed sampler to allow different workers to sample
   different portions of the dataset.
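A sketch of these scaling changes (the base learning rate of 0.01 and the
``train_dataset`` name are illustrative):

.. code:: python

   import torch
   import torch_xla.core.xla_model as xm
   from torch.utils.data.distributed import DistributedSampler

   torch.manual_seed(0)             # same seed on every worker
   lr = 0.01 * xm.xrt_world_size()  # scale learning rate by worker count
   train_sampler = DistributedSampler(
       train_dataset,
       num_replicas=xm.xrt_world_size(),
       rank=xm.get_ordinal())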
Also, the ``xm.save()`` function used to save the checkpoint automatically
saves only the rank-0 worker's parameters.
The resulting script is ``train_torchrun.py``
(note again that we import the MLP model from ``model.py``):
Next we use the ``torchrun`` utility, which is included with the torch
installation, to run multiple processes, each using one NeuronCore. Use
the option ``--nproc_per_node`` to indicate the number of processes to
launch. For example, to run on two NeuronCores on one Trn1 instance only, do:
.. code:: bash
torchrun --nproc_per_node=2 train_torchrun.py
NOTE: Currently we only support 1 and 2 worker configurations on trn1.2xlarge and 1, 2, 8, and 32-worker configurations on trn1.32xlarge.
Expected output on trn1.32xlarge (second run to avoid compilations):
.. code:: bash
----------Training ---------------
----------Training ---------------
... (Info messages truncated)
Train throughput (iter/sec): 163.25353269069706
Train throughput (iter/sec): 163.23261047441036
Final loss is 0.3469
Final loss is 0.1129
----------End Training ---------------
----------End Training ---------------
In another example, we run on two trn1.32xlarge instances launched with EFA-enabled interfaces, using an `EFA-enabled security group <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa-start-nccl-base.html#nccl-start-base-setup>`__, and set up using :ref:`Install PyTorch Neuron on Trn1 <pytorch-neuronx-install>`.
NOTE: To run on multiple instances, you will need to use trn1.32xlarge instances and use all 32 NeuronCores on each instance.
On the rank-0 Trn1 host (root), run with ``--node_rank=0`` using torchrun utility, and ``--master_addr`` set to rank-0 host's IP address:
.. code:: shell
export FI_EFA_USE_DEVICE_RDMA=1
export FI_PROVIDER=efa
torchrun --nproc_per_node=32 --nnodes=2 --node_rank=0 --master_addr=<root IP> --master_port=2020 train_torchrun.py
On another Trn1 host, run with ``--node_rank=1``, and ``--master_addr`` also set to rank-0 host's IP address:
.. code:: shell
export FI_EFA_USE_DEVICE_RDMA=1
export FI_PROVIDER=efa
torchrun --nproc_per_node=32 --nnodes=2 --node_rank=1 --master_addr=<root IP> --master_port=2020 train_torchrun.py
It is important to launch the rank-0 worker with ``--node_rank=0`` to avoid a hang.
To train on multiple instances, it is recommended to use a ParallelCluster. For a ParallelCluster example, please see `Train a model on AWS Trn1 ParallelCluster <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples>`__.
Single-worker MLP evaluation on Trainium
----------------------------------------
After training, the final checkpoint is saved in ``checkpoints`` directory. You can run the evaluation step by running the ``eval.py`` script in the same directory as the training script:
.. code:: bash
cd ~/aws-neuron-samples/torch-neuronx/training/mnist_mlp
python eval.py
This evaluation phase can be merged with the training script to check accuracy, for example at the end of every epoch. It is kept separate for illustration purposes.
The evaluation script follows a similar flow to the training script, with the following differences:
- The input data used is the validation subset of the MNIST dataset.
- We only need to loop through the dataset once (no epochs).
- There is only a forward pass through the model; no backward pass or optimizer update.
- We compute the accuracy across the validation set instead of the loss per batch.
Expected results (after a second execution, to eliminate the warmup compilation time of the first execution):
.. code:: bash
----------Evaluating---------------
Test throughput (iter/sec): 47.897945949832845
Accuracy: 0.9273833632469177
----------Done Evaluating---------------
If you get a lower accuracy than above, please check that the training is done with at least 4 epochs.
You can also use :ref:`torch_neuronx_trace_api` in the evaluation loop. This can be achieved with the following changes to ``eval.py``:
- Use ``device = 'cpu'`` instead of the XLA device.
- Don't use ``mark_step()``.
- Trace the model at the first iteration to freeze it and precompile it for inference:
.. code:: python
if idx == 0:
import torch_neuronx
model = torch_neuronx.trace(model, test_x)
However, note that the inference trace API fixes the input tensor shape, so every input tensor needs to match the size used during the tracing step. To ensure every batch from ``DataLoader`` has the same tensor shape, pass the ``drop_last=True`` option when instantiating the ``DataLoader``.
.. code:: python
test_loader = DataLoader(test_dataset, batch_size=32, drop_last=True)
The script ``eval_using_trace.py`` can be compared against ``eval.py`` to show the above modifications. It can be executed using:
.. code:: bash
python eval_using_trace.py
Expected results (note the large increase in performance when using the trace API for inference):
.. code:: bash
----------Evaluating---------------
Test throughput (iter/sec): 409.0836291417652
Accuracy: 0.9288585186004639
----------Done Evaluating---------------
Known issues and limitations
----------------------------
The MLP model is not optimized for performance. For single-worker training, performance can be improved by using MpDeviceLoader, as in the multiprocessing example. For example, by setting ``--nproc_per_node=1`` in the torchrun example, you will see higher MLP performance.
.. code:: bash
(aws_neuron_venv_pytorch_p36) [ec2-user@ip-172-31-69-14 mnist_mlp]$ torchrun --nproc_per_node=1 train_torchrun.py
----------Training ---------------
... (Info messages truncated)
Train throughput (iter/sec): 192.43508922834008
Final loss is 0.2720
----------End Training ---------------
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _neuronx-mlp-training-tutorial:
Multi-Layer Perceptron Training Tutorial
========================================
MNIST is a standard dataset for handwritten digit recognition. A
multi-layer perceptron (MLP) model can be trained with MNIST dataset to
recognize hand-written digits. This tutorial starts with a 3-layer MLP
training example in PyTorch on CPU, then show how to modify it to run on
Trainium using PyTorch Neuron. It also shows how to do multiple worker
data parallel MLP training.
.. contents:: Table of Contents
:local:
:depth: 2
.. include:: ../note-performance.txt
Setup environment and download examples
---------------------------------------
Before running the tutorial please follow the installation instructions at:
:ref:`Install PyTorch Neuron on
Trn1 <setup-torch-neuronx>`
Please set the storage of instance to *512GB* or more if you also want to run through the BERT pretraining and GPT pretraining tutorials.
For all the commands below, make sure you are in the virtual environment that you have created above before you run the commands:
.. code:: shell
source ~/aws_neuron_venv_pytorch/bin/activate
Install needed dependencies in your environment by running:
.. code:: bash
pip install pillow
Torchvision package is needed for MNIST dataset and has already been installed as part of :ref:`Install PyTorch Neuron on Trn1 <pytorch-neuronx-install>`. Installing Torchvision together with torch-neuronx ensures that the compatible version of Torchvision is selected. For example, torchvision==0.12 is compatible with torch==1.11 and torchvision==0.13 is compatible with torch==1.12.
To download the MNIST MLP examples, do:
.. code:: bash
git clone https://github.com/aws-neuron/aws-neuron-samples.git
cd aws-neuron-samples/torch-neuronx/training/mnist_mlp
Multi-layer perceptron MNIST model
----------------------------------
In ``model.py``, we define the multi-layer perceptron (MLP) MNIST model with 3
linear layers and ReLU activations, followed by a log-softmax layer.
This model will be used in multiple example scripts.
Single-worker MLP training script in PyTorch on CPU
---------------------------------------------------
We will show how to modify a training script that runs on other platform to run on Trainium.
We begin with a single-worker MLP training script for running on
the host CPUs of the Trainium instance. The training script imports the
MLP model from ``model.py``.
In this training script, we load the MNIST train dataset and, within the
``main()`` method, set the data loader to read batches of 32 training
examples and corresponding labels.
Next we instantiate the MLP model and move it to the device. We use
``device = 'cpu'`` to illustrate the use of device in PyTorch. On GPU
you would use ``device = 'cuda'`` instead.
We also instantiate the other two components of a neural network
trainer: stochastic-gradient-descent (SGD) optimizer and
negative-log-likelihood (NLL) loss function (also known as cross-entropy
loss).
After the optimizer and loss function, we create a training loop to iterate over the training samples and
labels, performing the following steps for each batch in each iteration:
- Zero gradients using:
.. code:: python
optimizer.zero_grad()
- Move training samples and labels to device using the 'tensor.to'
method.
- Perform forward/prediction pass using
.. code:: python
output = model(train_x)
- The prediction results are compared against the corresponding labels
using the loss function to compute the loss
.. code:: python
loss_fn(output, train_label)
- The loss is propagated back through the model using chain-rule to
compute the weight gradients
.. code:: python
loss.backward()
- The weights are updated with a change that is proportional to the
computed weights gradients
.. code:: python
optimizer.step()
At the end of training we compute the throughput, display the final loss
and save the checkpoint.
Expected CPU output:
.. code:: bash
----------Training ---------------
Train throughput (iter/sec): 286.96994718801335
Final loss is 0.1040
----------End Training ---------------
For a full tutorial on training in PyTorch, please see
https://pytorch.org/tutorials/beginner/introyt/trainingyt.html.
Thus far we have used PyTorch without Trainium. Next, we will show how
to change this script to run on Trainium.
Single-worker MLP training on Trainium
--------------------------------------
To run on Trainium, first we modify the CPU training script train_cpu.py to run with
PyTorch Neuron torch_xla as described in :ref:`PyTorch Neuron for Trainium Getting Started Guide <pytorch-neuronx-programming-guide>`
by changing the device:
.. code:: python
import torch_xla.core.xla_model as xm
device = xm.xla_device()
# or
device = 'xla'
When the model is moved to the XLA device using ``model.to(device)``
method, subsequent operations on the model are recorded for later
execution. This is XLA's lazy execution which is different from
PyTorch's eager execution. Within the training loop, we must mark the
graph to be optimized and run on XLA device (NeuronCore) using
xm.mark_step() (unless MpDeviceLoader is used as you will see in the next section).
Without this mark, XLA cannot determine where the graph
ends. The collected computational graph also gets compiled and executed
when you request the value of a tensor such as by calling
``loss.item()`` or ``print(loss)``.
To save a checkpoint, it is recommended to use the ``xm.save()``
function instead of ``torch.save()`` to ensure states are moved to CPU.
``xm.save()`` also prevents the "XRT memory handle not found" warning at
the end of evaluation script (if the checkpoint saved using torch.save()
is used for evaluation).
The resulting script ``train.py`` can be executed as
``python3 train.py``. Again, note that we import the MLP model
from ``model.py``. When you examine the script, the comments that begin with
'XLA' indicate the changes required to make the script compatible with
torch_xla.
Expected output on trn1.32xlarge (start from a fresh compilation cache, located at /var/tmp/neuron-compile-cache by default):
.. code:: bash
2022-04-12 16:15:00.000947: INFO ||NCC_WRAPPER||: No candidate found under /var/tmp/neuron-compile-cache/USER_neuroncc-1.0.47218.0+162039557/MODULE_18200615679846498221.
2022-04-12 16:15:00.000949: INFO ||NCC_WRAPPER||: Cache dir for the neff: /var/tmp/neuron-compile-cache/USER_neuroncc-1.0.47218.0+162039557/MODULE_18200615679846498221/MODULE_0_SyncTensorsGraph.318_18200615679846498221_ip-172-31-69-14.ec2.internal-8355221-28940-5dc775cd78aa2/83a0fd4a-b07e-4404-aa55-701ab3b2700c
........
Compiler status PASS
2022-04-12 16:18:05.000843: INFO ||NCC_WRAPPER||: Exiting with a successfully compiled graph
2022-04-12 16:18:05.000957: INFO ||NCC_WRAPPER||: No candidate found under /var/tmp/neuron-compile-cache/USER_neuroncc-1.0.47218.0+162039557/MODULE_5000680699473283909.
2022-04-12 16:18:05.000960: INFO ||NCC_WRAPPER||: Cache dir for the neff: /var/tmp/neuron-compile-cache/USER_neuroncc-1.0.47218.0+162039557/MODULE_5000680699473283909/MODULE_1_SyncTensorsGraph.390_5000680699473283909_ip-172-31-69-14.ec2.internal-8355221-28940-5dc7767e5fc69/7d0a2955-11b4-42e6-b536-6f0f02cc68df
.
Compiler status PASS
2022-04-12 16:18:12.000912: INFO ||NCC_WRAPPER||: Exiting with a successfully compiled graph
----------Training ---------------
Train throughput (iter/sec): 95.06756661972014
Final loss is 0.1979
----------End Training ---------------
If you re-run the training script a second time, you will see messages
indicating that the compiled graphs are cached in the persistent cache
from the previous run and that the startup time is quicker:
.. code:: bash
(aws_neuron_venv_pytorch_p36) [ec2-user@ip-172-31-69-14 mnist_mlp]$ python train.py |& tee log_trainium
2022-04-12 16:21:58.000241: INFO ||NCC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/USER_neuroncc-1.0.47218.0+162039557/MODULE_18200615679846498221/MODULE_0_SyncTensorsGraph.318_18200615679846498221_ip-172-31-69-14.ec2.internal-8355221-28940-5dc775cd78aa2/83a0fd4a-b07e-4404-aa55-701ab3b2700c/MODULE_0_SyncTensorsGraph.318_18200615679846498221_ip-172-31-69-14.ec2.internal-8355221-28940-5dc775cd78aa2.neff. Exiting with a successfully compiled graph
2022-04-12 16:21:58.000342: INFO ||NCC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/USER_neuroncc-1.0.47218.0+162039557/MODULE_5000680699473283909/MODULE_1_SyncTensorsGraph.390_5000680699473283909_ip-172-31-69-14.ec2.internal-8355221-28940-5dc7767e5fc69/7d0a2955-11b4-42e6-b536-6f0f02cc68df/MODULE_1_SyncTensorsGraph.390_5000680699473283909_ip-172-31-69-14.ec2.internal-8355221-28940-5dc7767e5fc69.neff. Exiting with a successfully compiled graph
----------Training ---------------
Train throughput (iter/sec): 93.16748895384832
Final loss is 0.1979
----------End Training ---------------
Multiple graphs can be created during execution since there are
differences between some iterations (first, steady state, last). After
the first iteration, the graph for each iteration should remain the same
from iteration to iteration. This allows XLA runtime to execute a
previous compiled graph that has been cached in XLA runtime cache.
If the inner training loop has some control-flows, for example for
gradient accumulation, the number of compiled graphs may increase due to the
generation and consumption of intermediates as well as additional
operations when the conditional path is taken.
Multi-worker data-parallel MLP training using torchrun
------------------------------------------------------
Data parallel training allows you to replicate your script across
multiple workers, each worker processing a proportional portion of the
dataset, in order to train faster.
The PyTorch distributed utility torchrun can be used to launch multiple
processes in a server node for multi-worker data parallel training.
To run multiple workers in data parallel configuration using torchrun,
modify the single-worker training script train.py as follows (below we use ``xm``
as alias for ``torch_xla.core.xla_model`` and ``xmp`` as alias for
``torch_xla.distributed.xla_multiprocessing``):
1. Import XLA backend for torch.distributed using ``import torch_xla.distributed.xla_backend``.
2. Use ``torch.distributed.init_process_group('xla')``
to initialize PyTorch XLA runtime and Neuron
runtime.
3. Use the XLA multiprocessing device loader (``MpDeviceLoader``) from
``torch_xla.distributed`` to wrap the PyTorch data loader.
4. Use ``xm.optimizer_step(optimizer)`` to perform allreduce and take
optimizer step.
XLA MpDeviceLoader is optimized for XLA and is recommended for best
performance. It also takes care of marking the step for execution
(compile and execute the lazily collected operations for an iteration)
so no separate ``xm.mark_step()`` is needed.
The following are general best-practice changes needed to scale up the
training:
1. Set the random seed to be the same across workers.
2. Scale up the learning rate by the number of workers. Use
``xm.xrt_world_size()`` to get the global number of workers.
3. Add a distributed sampler to allow different workers to sample
different portions of the dataset.
Also, the ``xm.save()`` function used to save checkpoints automatically
saves only the rank-0 worker's parameters.
The resulting script is ``train_torchrun.py``
(note again that we import the MLP model from ``model.py``); a condensed sketch of it is shown below:
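The following condensed sketch shows the shape of ``train_torchrun.py`` after the changes above; the hyperparameters and dataset setup are illustrative stand-ins matching ``train.py``, and the authoritative script lives in the aws-neuron-samples repository.

.. code:: python

    import torch
    import torch.distributed as dist
    import torch_xla.core.xla_model as xm
    import torch_xla.distributed.parallel_loader as pl
    import torch_xla.distributed.xla_backend  # registers the 'xla' backend (change 1)
    from torch.utils.data import DataLoader
    from torch.utils.data.distributed import DistributedSampler
    from torchvision import datasets, transforms
    from model import MLP

    def main():
        torch.manual_seed(0)                          # same seed on every worker
        dist.init_process_group('xla')                # change 2: init XLA/Neuron runtime
        device = xm.xla_device()

        train_dataset = datasets.MNIST('./MNIST_DATA', train=True, download=True,
                                       transform=transforms.ToTensor())
        sampler = DistributedSampler(train_dataset,
                                     num_replicas=xm.xrt_world_size(),
                                     rank=xm.get_ordinal())
        loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)
        loader = pl.MpDeviceLoader(loader, device)    # change 3: also marks the step

        model = MLP().to(device)
        lr = 0.01 * xm.xrt_world_size()               # scale LR by number of workers
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = torch.nn.NLLLoss()

        for epoch in range(4):
            for train_x, train_label in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(train_x.view(train_x.size(0), -1)), train_label)
                loss.backward()
                xm.optimizer_step(optimizer)          # change 4: allreduce + step

    if __name__ == '__main__':
        main()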
Next we use the ``torchrun`` utility that is included with the torch
installation to run multiple processes, each using one NeuronCore. Use
the ``--nproc_per_node`` option to indicate the number of processes to launch.
For example, to run on two NeuronCores on one Trn1 instance only, do:
.. code:: bash
torchrun --nproc_per_node=2 train_torchrun.py
NOTE: Currently we support only 1- and 2-worker configurations on trn1.2xlarge, and 1-, 2-, 8-, and 32-worker configurations on trn1.32xlarge.
Expected output on trn1.32xlarge (second run to avoid compilations):
.. code:: bash
----------Training ---------------
----------Training ---------------
... (Info messages truncated)
Train throughput (iter/sec): 163.25353269069706
Train throughput (iter/sec): 163.23261047441036
Final loss is 0.3469
Final loss is 0.1129
----------End Training ---------------
----------End Training ---------------
In another example, we run on two trn1.32xlarge instances launched with EFA-enabled interfaces, using an `EFA-enabled security group <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa-start-nccl-base.html#nccl-start-base-setup>`__, and set up using :ref:`Install PyTorch Neuron on Trn1 <pytorch-neuronx-install>`.
NOTE: To run on multiple instances, you will need to use trn1.32xlarge instances and use all 32 NeuronCores on each instance.
On the rank-0 Trn1 host (root), run with ``--node_rank=0`` using the torchrun utility, and ``--master_addr`` set to the rank-0 host's IP address:
.. code:: shell
export FI_EFA_USE_DEVICE_RDMA=1
export FI_PROVIDER=efa
torchrun --nproc_per_node=32 --nnodes=2 --node_rank=0 --master_addr=<root IP> --master_port=2020 train_torchrun.py
On another Trn1 host, run with ``--node_rank=1``, and ``--master_addr`` also set to rank-0 host's IP address:
.. code:: shell
export FI_EFA_USE_DEVICE_RDMA=1
export FI_PROVIDER=efa
torchrun --nproc_per_node=32 --nnodes=2 --node_rank=1 --master_addr=<root IP> --master_port=2020 train_torchrun.py
It is important to launch the rank-0 worker with ``--node_rank=0`` to avoid a hang.
To train on multiple instances, it is recommended to use a ParallelCluster. For a ParallelCluster example, please see `Train a model on AWS Trn1 ParallelCluster <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples>`__.
Single-worker MLP evaluation on Trainium
----------------------------------------
After training, the final checkpoint is saved in the ``checkpoints`` directory. You can run the evaluation step by running the ``eval.py`` script in the same directory as the training script:
.. code:: bash
cd ~/aws-neuron-samples/torch-neuronx/training/mnist_mlp
python eval.py
This evaluation phase can be merged with the training script to check accuracy, for example at the end of every epoch. It is kept separate for illustration purposes.
The evaluation script follows a similar flow to the training script, with the following differences:
- The input data used is the validation subset of the MNIST dataset.
- The dataset is looped through only once (no epochs).
- There is only a forward pass through the model; no backward pass or optimizer update.
- The accuracy is computed across the validation set instead of the loss per batch; a sketch of the resulting loop is shown below.
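A minimal sketch of the evaluation loop implied by this list follows; the checkpoint and dataset paths are hypothetical placeholders, and the authoritative version is ``eval.py`` in the samples repository.

.. code:: python

    import torch
    import torch_xla.core.xla_model as xm
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms
    from model import MLP

    device = xm.xla_device()
    model = MLP().to(device)
    # 'checkpoints/checkpoint.pt' is a hypothetical path used for illustration
    model.load_state_dict(torch.load('checkpoints/checkpoint.pt'))
    model.eval()

    test_dataset = datasets.MNIST('./MNIST_DATA', train=False,
                                  transform=transforms.ToTensor())
    test_loader = DataLoader(test_dataset, batch_size=32)

    correct = total = 0
    with torch.no_grad():
        for test_x, test_label in test_loader:
            test_x, test_label = test_x.to(device), test_label.to(device)
            pred = model(test_x.view(test_x.size(0), -1)).argmax(dim=1)
            correct += (pred == test_label).sum()
            total += test_label.size(0)
            xm.mark_step()  # execute the lazily collected operations
    print(f'Accuracy: {correct.item() / total}')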
Expected results (after a second execution to eliminate warmup compilation time during first execution):
.. code:: bash
----------Evaluating---------------
Test throughput (iter/sec): 47.897945949832845
Accuracy: 0.9273833632469177
----------Done Evaluating---------------
If you get a lower accuracy than above, please check that the training was done with at least 4 epochs.
You can also use :ref:`torch_neuronx_trace_api` in the evaluation loop. This can be achieved by the following changes to ``eval.py``:
- Use ``device = 'cpu'`` instead of XLA device.
- Don't use ``mark_step()``.
- Trace the model at the first iteration to freeze it and precompile for inference:
.. code:: python
if idx == 0:
    import torch_neuronx
    model = torch_neuronx.trace(model, test_x)
However, note that the inference trace API fixes the input tensor shape, so every input tensor will need to match the size used during the tracing step. To ensure every batch from the ``DataLoader`` has the same tensor shape, pass the ``drop_last=True`` option when instantiating the ``DataLoader``:
.. code:: python
test_loader = DataLoader(test_dataset, batch_size=32, drop_last=True)
The script ``eval_using_trace.py`` can be compared against ``eval.py`` to show the above modifications. It can be executed using:
.. code:: bash
python eval_using_trace.py
Expected results (note the large increase in performance when using the trace API for inference):
.. code:: bash
----------Evaluating---------------
Test throughput (iter/sec): 409.0836291417652
Accuracy: 0.9288585186004639
----------Done Evaluating---------------
Known issues and limitations
----------------------------
The MLP model is not optimized for performance. For single-worker training, performance can be improved by using ``MpDeviceLoader``, which exists in the multiprocessing example. For example, by setting ``--nproc_per_node=1`` in the torchrun example, you will see higher MLP performance:
.. code:: bash
(aws_neuron_venv_pytorch_p36) [ec2-user@ip-172-31-69-14 mnist_mlp]$ torchrun --nproc_per_node=1 train_torchrun.py
----------Training ---------------
... (Info messages truncated)
Train throughput (iter/sec): 192.43508922834008
Final loss is 0.2720
----------End Training ---------------
|
2023-09-29T20:54:47.854Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.rst.txt
|
```
.. _torch-hf-t5-finetune:
Fine-tune T5 model on Trn1
================================
In this tutorial, we show how to fine-tune a Hugging Face (HF) T5 model
using the HF trainer API. This example fine-tunes a `T5 model for
a text-summarization <https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization>`__ task on the CNN/DailyMail dataset.
.. contents:: Table of Contents
:local:
:depth: 2
.. include:: ../note-performance.txt
Setup and compilation
---------------------
Before running the tutorial please follow the installation instructions at:
:ref:`Install PyTorch Neuron on Trn1 <setup-torch-neuronx>`
Please set the instance storage to *512GB* or more if you also want to run through the BERT pretraining and GPT pretraining tutorials.
For all the commands below, make sure you are in the virtual environment that you have created above before you run the commands:
.. code:: shell
source ~/aws_neuron_venv_pytorch/bin/activate
First we install recent versions of the HF transformers, scikit-learn, and evaluate packages in our environment, and download the source matching the installed version. In this example, we chose version 4.26.0 and use the text summarization example from the HF transformers source:
.. code:: bash
export HF_VER=4.26.0
pip install -U transformers==$HF_VER datasets evaluate scikit-learn rouge_score pandas==1.4.0
cd ~/
git clone https://github.com/huggingface/transformers --branch v$HF_VER
cd ~/transformers/examples/pytorch/summarization
Single-worker training
----------------------
We will run the text-summarization fine-tuning task following the example in
README.md located at ``~/transformers/examples/pytorch/summarization``.
We use full BF16 casting via ``XLA_USE_BF16=1`` to enable best
performance. First, paste the following script into your terminal to
create a “run.sh” file and change it to executable:
.. code:: ipython3
tee run.sh > /dev/null <<EOF
#!/bin/bash
if [ "\$NEURON_PARALLEL_COMPILE" == "1" ]
then
XLA_USE_BF16=1 python3 ./run_summarization.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--do_train \
--do_eval \
--source_prefix "summarize: " \
--max_source_length 512 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 4 \
--overwrite_output_dir \
--pad_to_max_length \
--max_steps 100 \
--max_eval_samples 100 \
--gradient_accumulation_steps=32 \
--output_dir /tmp/tst-summarization |& tee log_run
else
XLA_USE_BF16=1 python3 ./run_summarization.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--do_train \
--do_eval \
--source_prefix "summarize: " \
--max_source_length 512 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 4 \
--overwrite_output_dir \
--pad_to_max_length \
--gradient_accumulation_steps=32 \
--output_dir /tmp/tst-summarization |& tee log_run
fi
EOF
chmod +x run.sh
We optionally precompile the model and training script using
`neuron\_parallel\_compile <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html?highlight=neuron_parallel_compile>`__ to warm up the persistent graph cache (Neuron
Cache) such that the actual run has fewer compilations (faster run
time):
.. code:: ipython3
neuron_parallel_compile ./run.sh
Note: For these auto-regressive models, do not run the
``predict_with_generate`` method during the precompile step. This is
because the ``neuron_parallel_compile`` utility runs the training
script in graph extraction mode, and no actual execution of the graph
is done; hence, the outputs at each step are invalid. Because
auto-regressive generation at each step depends on the output of the
previous step, the generate step would fail when fed these invalid
outputs.
Precompilation is optional and only needs to be done once unless
hyperparameters such as batch size are modified. After the optional
precompilation, the actual run will be faster with minimal additional
compilations.
.. code:: ipython3
./run.sh
If precompilation was not done, the first execution of ./run.sh will be
slower due to serial compilations. Rerunning the same script a second
time would show quicker execution as the compiled graphs will be already
cached in persistent cache.
Running the above script will run the T5-small fine-tuning on a single
process.
**Note:** As you may have noticed, we are not running
``predict_with_generate`` as part of training. This is because
``predict_with_generate`` requires auto-regressive sampling, where the
inputs to the decoder are created by appending outputs of previous
steps. This causes the decoder inputs to change shape, resulting in a
new graph each time. In other words, the current ``generate`` API
provided by HF transformers leads to repeated compilations. We are
working on a Neuron-friendly version of the ``generate`` API, which will
be made available as part of a future release and will enable running
``predict_with_generate`` as part of the training script.
As a workaround, we can run ``predict_with_generate`` on CPU after
the model is trained. Once training is completed, a trained checkpoint
is saved, and we can load the trained model and run
``predict_with_generate`` to compute the final accuracy.
To do so, in ``run_summarization.py``, add the following lines before
``transformers`` is imported (that is, before all the ``import`` statements):
.. code:: ipython3
import libneuronxla
# Disable configuring xla env
def _configure_env():
    pass
libneuronxla.configure_environment = _configure_env
You can now run the following, which runs the predict method on the CPU device.
.. code:: ipython3
NEURON_NUM_DEVICES=0 python3 ./run_summarization.py \
--model_name_or_path <CHECKPOINT_DIR> \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--do_predict \
--predict_with_generate \
--source_prefix "summarize: " \
--per_device_eval_batch_size 4 \
--max_source_length 512 \
--pad_to_max_length \
--no_cuda \
--output_dir /tmp/tst-summarization |& tee log_run
Note: To run on CPU, make sure that ``NEURON_NUM_DEVICES`` is
set to 0. This ensures that no XLA devices are created and the
trainer uses the default device (CPU).
.. _multi_worker_training:
Multi-worker Training
---------------------
The above script will run one worker on one NeuronCore. To run on
multiple cores, first add these lines to the top of ``run_summarization.py`` to disable
Distributed Data Parallel (DDP) when using torchrun (see the Known issues
and limitations section below):
.. code:: ipython3
# Disable DDP for torchrun
from transformers import __version__, Trainer
Trainer._wrap_model = lambda self, model, training=True, dataloader=None: model
Then launch the ``run_summarization.py`` script with torchrun, using the
``--nproc_per_node=N`` option to specify the number of workers (N=2 for
trn1.2xlarge, and N=2, 8, or 32 for trn1.32xlarge). The following
example runs 2 workers. Paste the following script into your terminal to
create a “run_2w.sh” file and change it to executable:
.. code:: ipython3
tee run_2w.sh > /dev/null <<EOF
#!/bin/bash
if [ "\$NEURON_PARALLEL_COMPILE" == "1" ]
then
XLA_USE_BF16=1 torchrun --nproc_per_node=2 ./run_summarization.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--do_train \
--do_eval \
--source_prefix "summarize: " \
--max_source_length 512 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 4 \
--overwrite_output_dir \
--pad_to_max_length \
--max_steps 100 \
--max_eval_samples 100 \
--gradient_accumulation_steps=32 \
--output_dir /tmp/tst-summarization |& tee log_run
else
XLA_USE_BF16=1 torchrun --nproc_per_node=2 ./run_summarization.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--do_train \
--do_eval \
--source_prefix "summarize: " \
--max_source_length 512 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 4 \
--overwrite_output_dir \
--pad_to_max_length \
--gradient_accumulation_steps=32 \
--output_dir /tmp/tst-summarization |& tee log_run
fi
EOF
chmod +x run_2w.sh
Again, we optionally precompile the model and training script using
``neuron_parallel_compile`` to warm up the persistent graph cache (Neuron
Cache), ignoring the results from this precompile run as it is only for
extracting and compiling the XLA graphs:
.. code:: ipython3
neuron_parallel_compile ./run_2w.sh
Precompilation is optional and only needs to be done once unless
hyperparameters such as batch size are modified. After the optional
precompilation, the actual run will be faster with minimal additional
compilations.
.. code:: ipython3
./run_2w.sh
During the run, you will notice that the “Total train batch size” is now
8 and the “Total optimization steps” is now half the number for
single-worker training. Also, if you open ``neuron-top`` in a separate terminal,
you should see 2 cores being utilized.
To train the T5-large model, you can set the ``model_name_or_path`` argument to ``t5-large``.
Please note that currently running ``t5-large`` on a trn1-2xl machine can result in ``HOST OOM`` during
compilation. Hence, it is recommended to run ``t5-large`` model training on a trn1-32xl machine.
On a trn1-32xl machine, you can create a run_32w.sh on the terminal using the following commands:
.. code:: ipython3
tee run_32w.sh > /dev/null <<EOF
#!/bin/bash
if [ "\$NEURON_PARALLEL_COMPILE" == "1" ]
then
XLA_USE_BF16=1 torchrun --nproc_per_node=32 ./run_summarization.py \
--model_name_or_path t5-large \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--do_train \
--do_eval \
--source_prefix "summarize: " \
--max_source_length 512 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--overwrite_output_dir \
--pad_to_max_length \
--max_steps 100 \
--max_eval_samples 100 \
--gradient_accumulation_steps=11 \
--output_dir /tmp/tst-summarization |& tee log_run
else
XLA_USE_BF16=1 torchrun --nproc_per_node=32 ./run_summarization.py \
--model_name_or_path t5-large \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--do_train \
--do_eval \
--source_prefix "summarize: " \
--max_source_length 512 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--overwrite_output_dir \
--pad_to_max_length \
--gradient_accumulation_steps=11 \
--output_dir /tmp/tst-summarization |& tee log_run
fi
EOF
chmod +x run_32w.sh
You can now follow the same steps as listed above. This script runs t5-large model training
with 32 data-parallel workers.
.. _known_issues:
Known issues and limitations
----------------------------
The following are currently known issues:
- Long compilation times: these can be alleviated with the
``neuron_parallel_compile`` tool, which extracts graphs from a short trial run and
compiles them in parallel ahead of the actual run, as shown above.
- T5-Large compilation causing processes to get killed on trn1-2xl: it is recommended
to run ``t5-large`` model training on a trn1-32xl machine, as this avoids host (CPU) OOM and also provides
faster training by making use of 32 data-parallel workers.
```
|
2023-09-29T20:54:47.935Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-customops/tutorials/customop-mlp-perf-opt.rst.txt
|
```
.. _neuronx-customop-mlp-perf:
Neuron Custom C++ Operators Performance Optimization
====================================================
In this tutorial, we build on the small MLP model shown in :ref:`neuronx-customop-mlp-tutorial` and demonstrate methods to optimize the performance of a custom C++ operator. We take advantage of the TCM accessor as well as multiple GPSIMD cores to enhance performance.
This tutorial assumes the reader has read :ref:`neuronx-customop-mlp-tutorial` and set up the environment described there.
.. contents:: Table of Contents
:local:
:depth: 2
Download Examples
-----------------
To download the source code for this tutorial, do:
.. code:: bash
git clone https://github.com/aws-neuron/aws-neuron-samples.git
cd aws-neuron-samples/torch-neuronx/inference/customop_mlp
.. note::
We will be using an inference example in this tutorial in order to adhere to certain Custom C++ operator restrictions when using multiple GPSIMD cores (see :ref:`custom-ops-api-ref-guide` for details on current restrictions).
.. note::
Custom C++ Operators are supported as of Neuron SDK Version 2.7 as a beta feature. As such, this feature is not installed by default. Additional tooling and library packages (RPM and DEB) are required. On AL2, they can be installed with the following commands:
::
sudo yum remove python3-devel -y
sudo yum remove aws-neuronx-gpsimd-tools-0.* -y
sudo yum remove aws-neuronx-gpsimd-customop-lib-0.* -y
sudo yum install python3-devel -y
sudo yum install aws-neuronx-gpsimd-tools-0.* -y
sudo yum install aws-neuronx-gpsimd-customop-lib-0.* -y
On Ubuntu, they can be installed with the following commands:
::
sudo apt-get remove python3-dev -y
sudo apt-get remove aws-neuronx-gpsimd-tools=0.* -y
sudo apt-get remove aws-neuronx-gpsimd-customop-lib=0.* -y
sudo apt-get install python3-dev -y
sudo apt-get install aws-neuronx-gpsimd-tools=0.* -y
sudo apt-get install aws-neuronx-gpsimd-customop-lib=0.* -y
Activate the virtual environment created in :ref:`neuronx-customop-mlp-tutorial`,
.. code:: shell
source ~/aws_neuron_venv_pytorch/bin/activate
As a reminder, ``ninja`` should already be installed in the virtual environment. If not, install it for PyTorch Custom Extensions in your environment by running:
.. code:: bash
pip install regex
pip install ninja
Model Configuration Adjustment
------------------------------
For this tutorial, we will enlarge the size of the hidden layer from ``[120, 84]`` to ``[4096, 2048]`` in ``model.py``.
.. code-block:: python
:emphasize-lines: 8
import torch
import torch.nn as nn
from torch.nn import functional as F
import my_ops

# Declare 3-layer MLP for MNIST dataset
class MLP(nn.Module):
    def __init__(self, input_size = 28 * 28, output_size = 10, layers = [4096, 2048]):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(input_size, layers[0])
        self.fc2 = nn.Linear(layers[0], layers[1])
        self.fc3 = nn.Linear(layers[1], output_size)

    def forward(self, x):
        f1 = self.fc1(x)
        r1 = my_ops.Relu.apply(f1)
        f2 = self.fc2(r1)
        r2 = my_ops.Relu.apply(f2)
        f3 = self.fc3(r2)
        return torch.log_softmax(f3, dim=1)
Performance with Element-wise Accessor
---------------------------------------
The ``neuron`` directory contains the same code shown in :ref:`neuronx-customop-mlp-tutorial`, where ``relu_forward`` is implemented with the element-wise accessor. Go to the ``neuron`` directory and run ``build.py`` followed by ``inference.py``; the expected output on a trn1 instance is:
.. code-block:: bash
Inf throughput (iter/sec): 8.098649744235592
----------End Inference ---------------
Performance with TCM Accessor
-----------------------------
Now we switch to the ``neuron-tcm`` folder. As mentioned in :ref:`custom-ops-api-ref-guide`, TCM accessors provide faster read and write performance. We implement ``relu_forward`` using the TCM accessor in ``relu.cpp``:
.. code-block:: c++
torch::Tensor relu_forward(const torch::Tensor& t_in) {
  size_t num_elem = t_in.numel();
  torch::Tensor t_out = torch::zeros(t_in.sizes(), torch::kFloat);

  // Stage the data through a small TCM buffer, processing it in chunks.
  static constexpr size_t buffer_size = 1024;
  float *tcm_buffer = (float*)torch::neuron::tcm_malloc(sizeof(float) * buffer_size);

  if (tcm_buffer != nullptr) {
    auto t_in_tcm_acc = t_in.tcm_accessor();
    auto t_out_tcm_acc = t_out.tcm_accessor();

    for (size_t i = 0; i < num_elem; i += buffer_size) {
      size_t remaining_elem = num_elem - i;
      size_t copy_size = (remaining_elem > buffer_size) ? buffer_size : remaining_elem;

      // Copy a chunk into TCM, apply ReLU in place, then copy it back out.
      t_in_tcm_acc.tensor_to_tcm<float>(tcm_buffer, i, copy_size);
      for (size_t j = 0; j < copy_size; j++) {
        tcm_buffer[j] = tcm_buffer[j] > 0.0 ? tcm_buffer[j] : 0.0;
      }
      t_out_tcm_acc.tcm_to_tensor<float>(tcm_buffer, i, copy_size);
    }
  }
  torch::neuron::tcm_free(tcm_buffer);
  return t_out;
}
Run ``build.py`` then ``inference.py``; the expected output on a trn1 instance is:
.. code-block:: bash
Inf throughput (iter/sec): 220.73800131604054
----------End Inference ---------------
Extending the example to utilize multiple GPSIMD cores
------------------------------------------------------
Now we switch to the ``neuron-multicore`` folder. We first enable the usage of multiple GPSIMD cores by passing ``multicore=True`` in ``build.py``.
.. code-block:: python
custom_op.load(
    name='relu',
    compute_srcs=['relu.cpp'],
    shape_srcs=['shape.cpp'],
    build_directory=os.getcwd(),
    multicore=True,
    verbose=True
)
After passing the flag, the kernel function ``relu_forward`` defined in ``relu.cpp`` executes on all GPSIMD cores. Thus we need to use ``cpu_id`` to partition the workload among the cores.
.. code-block:: c++
torch::Tensor relu_forward(const torch::Tensor& t_in) {
  size_t num_elem = t_in.numel();
  torch::Tensor t_out = get_dst_tensor();

  // Partition the elements evenly across the GPSIMD cores. Each core
  // computes its offset from the base partition size before the last
  // core's partition is extended to absorb any remainder.
  uint32_t cpu_id = get_cpu_id();
  uint32_t cpu_count = get_cpu_count();
  uint32_t partition = num_elem / cpu_count;
  size_t offset = (size_t)partition * cpu_id;
  if (cpu_id == cpu_count - 1) {
    partition = num_elem - partition * (cpu_count - 1);
  }

  static constexpr size_t buffer_size = 1024;
  float *tcm_buffer = (float*)torch::neuron::tcm_malloc(sizeof(float) * buffer_size);

  if (tcm_buffer != nullptr) {
    auto t_in_tcm_acc = t_in.tcm_accessor();
    auto t_out_tcm_acc = t_out.tcm_accessor();

    // Stream this core's partition through the TCM buffer in chunks.
    for (size_t i = 0; i < partition; i += buffer_size) {
      size_t remaining_elem = partition - i;
      size_t copy_size = (remaining_elem > buffer_size) ? buffer_size : remaining_elem;

      t_in_tcm_acc.tensor_to_tcm<float>(tcm_buffer, offset + i, copy_size);
      for (size_t j = 0; j < copy_size; j++) {
        tcm_buffer[j] = tcm_buffer[j] > 0.0 ? tcm_buffer[j] : 0.0;
      }
      t_out_tcm_acc.tcm_to_tensor<float>(tcm_buffer, offset + i, copy_size);
    }
  }
  torch::neuron::tcm_free(tcm_buffer);
  return t_out;
}
There are two noteworthy things in the code:
1. We use ``cpu_id`` and ``cpu_count`` to distribute the workload among all cores. In particular, each core performs ``relu`` on a partition of the tensor; the core's offset into the tensor is computed from the base partition size and ``cpu_id``.
2. The output of the operator is written directly to the tensor obtained from ``get_dst_tensor()``. The ``return t_out;`` statement is ignored during execution.
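For context, here is a minimal sketch of how an ``inference.py`` driver could exercise the operator, assuming the standard ``torch_neuronx.trace`` inference flow; the actual script in the sample repository is authoritative.

.. code-block:: python

    import torch
    import torch_neuronx
    from model import MLP

    model = MLP()
    model.eval()
    example = torch.rand(32, 28 * 28)             # illustrative batch shape
    traced = torch_neuronx.trace(model, example)  # compiles the model, custom op included
    print(traced(example))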
Run ``build.py`` then ``inference.py``; the expected output on a trn1 instance is:
.. code-block:: bash
Inf throughput (iter/sec): 269.936119707143
----------End Inference ---------------
Details of the API used in the sample here can be found in :ref:`custom-ops-api-ref-guide`.
```
|
2023-09-29T20:54:48.070Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/additional-examples-training.rst.txt
|
```
Additional Examples (``torch-neuronx``)
=======================================
.. toctree::
:maxdepth: 1
:hidden:
AWS Neuron Reference for Nemo Megatron GitHub Repository <https://github.com/aws-neuron/neuronx-nemo-megatron>
AWS Neuron Samples for EKS <https://github.com/aws-neuron/aws-neuron-eks-samples>
AWS Neuron Samples for AWS ParallelCluster <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples>
AWS Neuron Samples GitHub Repository <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training>
.. include:: /frameworks/torch/torch-neuronx/additional-examples-training.txt
```
|
2023-09-29T20:54:48.106Z
|
|
Install PyTorch Neuron (torch-neuronx) — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/torch/torch-neuronx/setup/pytorch-install.html#pytorch-neuronx-install
|
# Install PyTorch Neuron (torch-neuronx) — AWS Neuron Documentation
Table of Contents
- [Develop on AWS ML accelerator instance](#develop-on-aws-ml-accelerator-instance)
## Develop on AWS ML accelerator instance
PyTorch 1.13.1
Amazon Linux 2 DLAMI Base
Note
- For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
- When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
- While launching the instance, please use the AMI with the name `Deep Learning Base Neuron AMI (Amazon Linux 2) <Latest_Date>`.
- To launch an instance using a specific AMI, please refer to the instructions mentioned [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/finding-an-ami.html#finding-an-ami-console).
```
# Install Python venv
sudo yum install -y python3.7-venv gcc-c++
# Create Python venv
python3.7 -m venv aws_neuron_venv_pytorch
# Activate Python venv
source aws_neuron_venv_pytorch/bin/activate
python -m pip install -U pip
# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch --display-name "Python (torch-neuronx)"
pip install jupyter notebook
pip install environment_kernels
# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
# Install wget, awscli
python -m pip install wget
python -m pip install awscli
# Install Neuron Compiler and Framework
python -m pip install neuronx-cc==2.* torch-neuronx torchvision
```
Ubuntu 20 DLAMI Base
Note
- For a successful installation or update, execute each line of the instructions below separately or copy the contents of the code block into a script file and source its contents.
- When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
- While launching the instance, please use the AMI with the name `Deep Learning Base Neuron AMI (Ubuntu 20.04) <Latest_Date>`.
- To launch an instance using a specific AMI, please refer to the instructions mentioned [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/finding-an-ami.html#finding-an-ami-console).
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++
# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch
# Activate Python venv
source aws_neuron_venv_pytorch/bin/activate
python -m pip install -U pip
# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch --display-name "Python (torch-neuronx)"
pip install jupyter notebook
pip install environment_kernels
# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
# Install wget, awscli
python -m pip install wget
python -m pip install awscli
# Install Neuron Compiler and Framework
python -m pip install neuronx-cc==2.* torch-neuronx torchvision
```
Amazon Linux 2 DLAMI Pytorch
Note
- When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
- While launching the instance, please use the AMI with the name `Deep Learning AMI Neuron PyTorch 1.13.1 (Amazon Linux 2) <Latest_Date>`.
- To launch an instance using a specific AMI, please refer to the instructions mentioned [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/finding-an-ami.html#finding-an-ami-console).
```
# Activate Python venv
source /opt/aws_neuron_venv_pytorch/bin/activate
# Install Jupyter notebook kernel
pip install ipykernel
python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch --display-name "Python (torch-neuronx)"
pip install jupyter notebook
pip install environment_kernels
# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
# Update Neuron Compiler and Framework
python -m pip install --upgrade neuronx-cc==2.* torch-neuronx torchvision
```
Ubuntu 20 DLAMI Pytorch
Note
- When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
- While launching the instance, please use the AMI with the name `Deep Learning AMI Neuron PyTorch 1.13.1 (Ubuntu 20.04) <Latest_Date>`.
- To launch an instance using a specific AMI, please refer to the instructions mentioned [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/finding-an-ami.html#finding-an-ami-console).
```
# Activate Python venv
source /opt/aws_neuron_venv_pytorch/bin/activate
# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch --display-name "Python (torch-neuronx)"
pip install jupyter notebook
pip install environment_kernels
# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
# Update Neuron Compiler and Framework
python -m pip install --upgrade neuronx-cc==2.* torch-neuronx torchvision
```
Amazon Linux 2
Note
- Please refer to the instructions under the tab `Amazon Linux 2 DLAMI Base`.
Ubuntu 20
Note
- Please refer to the instructions under the tab `Ubuntu 20 DLAMI Base`.
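As an optional sanity check (not part of the official instructions above), you can run a small tensor operation on a NeuronCore from the activated virtual environment; this minimal Python sketch assumes the packages above installed successfully:

```
# Run inside the activated venv
import torch
import torch_xla.core.xla_model as xm  # available through torch-neuronx's torch-xla dependency

device = xm.xla_device()       # acquires a NeuronCore
x = torch.rand(2, 2, device=device)
print((x + x).cpu())           # moving to CPU forces execution on the device
```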
_This document is relevant for_: `Inf2`, `Trn1`, `Trn1n`
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
*This document is relevant for*: ``Inf2``, ``Trn1``, ``Trn1n``

Install PyTorch Neuron (``torch-neuronx``)
===========================================

Develop on AWS ML accelerator instance
--------------------------------------

**PyTorch 1.13.1**

*Amazon Linux 2 DLAMI Base*

.. note::

   - For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
   - When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
   - While launching the instance, please use the AMI with the name ``Deep Learning Base Neuron AMI (Amazon Linux 2) <Latest_Date>``.
   - To launch an instance using a specific AMI, please refer to the instructions `here <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/finding-an-ami.html#finding-an-ami-console>`_.

.. code:: bash

   # Install Python venv
   sudo yum install -y python3.7-venv gcc-c++
   # Create Python venv
   python3.7 -m venv aws_neuron_venv_pytorch
   # Activate Python venv
   source aws_neuron_venv_pytorch/bin/activate
   python -m pip install -U pip
   # Install Jupyter notebook kernel
   pip install ipykernel
   python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch --display-name "Python (torch-neuronx)"
   pip install jupyter notebook
   pip install environment_kernels
   # Set pip repository pointing to the Neuron repository
   python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
   # Install wget, awscli
   python -m pip install wget
   python -m pip install awscli
   # Install Neuron Compiler and Framework
   python -m pip install neuronx-cc==2.* torch-neuronx torchvision

*Ubuntu 20 DLAMI Base*

.. note::

   - For a successful installation or update, execute each line of the instructions below separately, or copy the contents of the code block into a script file and source its contents.
   - When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
   - While launching the instance, please use the AMI with the name ``Deep Learning Base Neuron AMI (Ubuntu 20.04) <Latest_Date>``.
   - To launch an instance using a specific AMI, please refer to the instructions `here <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/finding-an-ami.html#finding-an-ami-console>`_.

.. code:: bash

   # Install Python venv
   sudo apt-get install -y python3.8-venv g++
   # Create Python venv
   python3.8 -m venv aws_neuron_venv_pytorch
   # Activate Python venv
   source aws_neuron_venv_pytorch/bin/activate
   python -m pip install -U pip
   # Install Jupyter notebook kernel
   pip install ipykernel
   python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch --display-name "Python (torch-neuronx)"
   pip install jupyter notebook
   pip install environment_kernels
   # Set pip repository pointing to the Neuron repository
   python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
   # Install wget, awscli
   python -m pip install wget
   python -m pip install awscli
   # Install Neuron Compiler and Framework
   python -m pip install neuronx-cc==2.* torch-neuronx torchvision

*Amazon Linux 2 DLAMI Pytorch*

.. note::

   - When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
   - While launching the instance, please use the AMI with the name ``Deep Learning AMI Neuron PyTorch 1.13.1 (Amazon Linux 2) <Latest_Date>``.
   - To launch an instance using a specific AMI, please refer to the instructions `here <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/finding-an-ami.html#finding-an-ami-console>`_.

.. code:: bash

   # Activate Python venv
   source /opt/aws_neuron_venv_pytorch/bin/activate
   # Install Jupyter notebook kernel
   pip install ipykernel
   python3.7 -m ipykernel install --user --name aws_neuron_venv_pytorch --display-name "Python (torch-neuronx)"
   pip install jupyter notebook
   pip install environment_kernels
   # Set pip repository pointing to the Neuron repository
   python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
   # Update Neuron Compiler and Framework
   python -m pip install --upgrade neuronx-cc==2.* torch-neuronx torchvision

*Ubuntu 20 DLAMI Pytorch*

.. note::

   - When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
   - While launching the instance, please use the AMI with the name ``Deep Learning AMI Neuron PyTorch 1.13.1 (Ubuntu 20.04) <Latest_Date>``.
   - To launch an instance using a specific AMI, please refer to the instructions `here <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/finding-an-ami.html#finding-an-ami-console>`_.

.. code:: bash

   # Activate Python venv
   source /opt/aws_neuron_venv_pytorch/bin/activate
   # Install Jupyter notebook kernel
   pip install ipykernel
   python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch --display-name "Python (torch-neuronx)"
   pip install jupyter notebook
   pip install environment_kernels
   # Set pip repository pointing to the Neuron repository
   python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
   # Update Neuron Compiler and Framework
   python -m pip install --upgrade neuronx-cc==2.* torch-neuronx torchvision

*Amazon Linux 2*

.. note::

   - Please refer to the instructions under the tab ``Amazon Linux 2 DLAMI Base``.

*Ubuntu 20*

.. note::

   - Please refer to the instructions under the tab ``Ubuntu 20 DLAMI Base``.
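Once the packages are installed, a quick way to confirm the stack works is to run a small computation on the Neuron device. The snippet below is a minimal sketch, assuming it runs on a Trn1/Inf2 instance inside the activated virtual environment (``torch-neuronx`` is built on top of ``torch-xla``):

.. code:: python

   import torch
   import torch_xla.core.xla_model as xm

   # Acquiring the XLA device and running a tiny op exercises the
   # Neuron compiler and runtime end to end.
   device = xm.xla_device()
   x = torch.rand(2, 2).to(device)
   print((x + x).cpu())   # moving back to CPU forces execution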
|
2023-09-29T20:54:48.290Z
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.rst.txt
|
```
.. _torch-neuronx-profiling-api:
PyTorch Neuron (``torch-neuronx``) Profiling API
================================================
.. contents:: Table of Contents
:local:
:depth: 2
The profiler provides a method to generate a context manager to capture
trace events at the operator or runtime level.
.. py:function:: torch_neuronx.experimental.profiler.profile(port=9012,ms_duration=60000,neuron_tensorboard_plugin_dir="logs/plugins/neuron",profile_type="operator",auto_start=True,delete_working=True)
The :func:`torch_neuronx.experimental.profiler.profile` method returns a ``profile`` context manager object. This object
doesn't need to be used directly, as default options are set to automatically capture events based on the ``profile_type``.
The context manager wraps around the entire model
and training/inference loop, and is
backwards-compatible with ``torch_xla.debug.profiler``.
*Required Arguments*
None
*Optional Keyword Arguments*
:keyword int port: Port to run the profiling GRPC server on. Default is 9012.
:keyword int ms_duration: This defines how long the profiler will capture the
HLO artifacts from the model to view in the profiler. The unit is in
milliseconds. The default value is 60000 ms, or 1 minute.
:keyword str neuron_tensorboard_plugin_dir: The directory the Neuron TensorBoard plugin will write files to.
This is ``logs/plugins/neuron`` by default.
:keyword str profile_type: Either ``"trace"`` or ``"operator"``. ``"trace"``
is the Torch runtime trace level, while ``"operator"`` is the model
operator trace level. Default is ``"operator"``.
:keyword bool auto_start: If set to true, the profiler will start profiling immediately.
If set to false, the profiler can be set to start at a later condition.
Refer to ``profile.start()`` for more details. Default is ``True``.
:keyword bool delete_working: If set to ``False``, turns off the deletion of temporary files. Default is ``True``.
:keyword bool traced_only: This should be set to ``True`` if profiling a model that has been traced with
``torch_neuronx.trace()``. Default is ``False``.
:returns: The traced :class:`profile`
:rtype: ~profile
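As an illustration, below is a minimal sketch of profiling a traced model (the ``Linear`` model and example input are hypothetical placeholders; ``traced_only=True`` is set because the model goes through ``torch_neuronx.trace()``, per the keyword description above):

.. code:: python

   import torch
   import torch_neuronx
   from torch_neuronx.experimental import profiler

   model = torch.nn.Linear(4, 4)    # placeholder model
   example = torch.rand(1, 4)       # placeholder input

   # With the defaults shown above, profiling starts immediately and
   # operator-level events are captured.
   with profiler.profile(profile_type="operator", traced_only=True):
       neuron_model = torch_neuronx.trace(model, example)
       neuron_model(example)        # execution captured by the profiler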
.. py:function:: torch_neuronx.experimental.profiler.profile.start()
The :func:`torch_neuronx.experimental.profiler.profile.start` method starts the profiler if it was not already started (i.e. when ``auto_start=False``).
This function takes no parameters and returns nothing.
*Required Arguments*
None
*Optional Keyword Arguments*
None
:returns: None
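For example, to skip warm-up iterations before capturing events, the profiler can be created with ``auto_start=False`` and started later (a sketch; ``run_step`` is a hypothetical per-step function):

.. code:: python

   from torch_neuronx.experimental import profiler

   prof = profiler.profile(auto_start=False)
   with prof:
       for step in range(20):
           run_step()           # hypothetical training/inference step
           if step == 4:
               prof.start()     # begin capturing after 5 warm-up steps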
```
|
|
2023-09-29T20:54:48.313Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/misc-training.rst.txt
|
```
Misc (Training - torch-neuronx)
===============================
.. toctree::
:maxdepth: 1
:hidden:
/frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators
/frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution
/frameworks/torch/torch-neuronx/training-troubleshooting
/release-notes/torch/torch-neuronx/index
.. include:: /frameworks/torch/torch-neuronx/misc-training.txt
```
|
|
2023-09-29T20:54:48.417Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/api-reference-guide/training/index.rst.txt
|
```
API Reference Guide for Training (``torch-neuronx``)
====================================================
.. toctree::
:maxdepth: 1
:hidden:
/frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile
/frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars
/general/arch/neuron-features/neuron-caching
/frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api
.. dropdown:: API Reference Guide for Training (``torch-neuronx``)
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
:open:
.. include:: /frameworks/torch/torch-neuronx/api-reference-guide/training/index.txt
```
|
|
2023-09-29T20:54:48.428Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.rst.txt
|
```
.. _pytorch-neuronx-parallel-compile-cli:
PyTorch Neuron neuron_parallel_compile CLI (``torch-neuronx``)
==============================================================
PyTorch Neuron performs just-in-time compilation of graphs during
execution. At every step, a graph is traced. If the traced graph varies
from the previous executions, it is compiled by the Neuron compiler. For
large models, the compilation time for each graph can be high. Moreover,
because of JIT, all these graphs are compiled sequentially, hence
incurring a huge compilation penalty.
To reduce this compilation time during execution, the ``neuron_parallel_compile``
utility is provided as part of the PyTorch Neuron installation.
``neuron_parallel_compile`` will extract graphs from a trial run of your script,
perform parallel pre-compilation of the graphs, and populate the :ref:`Neuron Persistent Cache <neuron-caching>`
on disk or in an AWS S3 bucket with the compiled graphs.
Your trial run should be limited to a few steps
(e.g. 10-15), enough for the utility to extract the different graphs needed for
full execution. To run the utility:
``neuron_parallel_compile <run commands>``
Where ``<run commands>`` are the commands for a short run (e.g. 10
steps) that traces the training loops for pre-compilation. An example
run command is ``torchrun --nproc_per_node=2 <train script>``, where the
train script accepts a ``--steps_this_run`` option to limit the number of run steps:
``neuron_parallel_compile torchrun --nproc_per_node=2 <train script> --steps_this_run=10``
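For reference, here is a minimal sketch of how a training script might implement such a step-limit option (the flag name ``--steps_this_run`` is the script's own convention, and ``train_one_step`` / ``TOTAL_STEPS`` are hypothetical):

.. code:: python

   import argparse

   parser = argparse.ArgumentParser()
   parser.add_argument("--steps_this_run", type=int, default=-1,
                       help="stop after this many steps (for pre-compilation runs)")
   args = parser.parse_args()

   TOTAL_STEPS = 10000                  # hypothetical full-run length
   for step in range(TOTAL_STEPS):
       train_one_step()                 # hypothetical per-step training work
       if 0 < args.steps_this_run <= step + 1:
           break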
You may notice that the output from the model is invalid when you use
``neuron_parallel_compile``. This is because when you initiate your training
run command with ``neuron_parallel_compile``, the utility will run your command
with environment variables that put your training script into graph
extraction mode. In this mode, no real execution is performed and the outputs
are invalid. You will also see outputs similar to the following about the compile cache path and the
extracted graphs:
.. code:: bash
INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
INFO ||NEURON_CC_WRAPPER||: Extracting graphs (/var/tmp/neuron-compile-cache/neuronxcc-2.0.0.22266a0+a69f71e55/MODULE_9219523464496887986+abb26765/model.hlo.pb) for ahead-of-time parallel compilation. No compilation was done.
After the trial execution ends and the graphs are extracted, ``neuron_parallel_compile`` would launch multiple compilation processes in parallel to compile all these graphs. Compiled graphs (NEFFs) are inserted into the Neuron Persistent Cache. You will also see outputs similar to the following about the compile cache path, the list of graphs (HLOs) to be compiled, and the running statistics of compiled graphs (count of remaining graphs, locked graphs, failed graphs, done compiled graphs).
.. code:: bash
INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
INFO ||NEURON_CACHE||: Current remaining items are 5, locked are 0, failed are 0, done are 0, total is 5
INFO ||NEURON_PARALLEL_COMPILE||: master grab hlos to compile: ['/var/tmp/neuron-compile-cache/neuronxcc-2.0.0.22266a0+a69f71e55/MODULE_8068656800389078395+abb26765/model.hlo.pb', '/var/tmp/neuron-compile-cache/neuronxcc-2.0.0.22266a0+a69f71e55/MODULE_17109392703413819652+abb26765/model.hlo.pb', '/var/tmp/neuron-compile-cache/neuronxcc-2.0.0.22266a0+a69f71e55/MODULE_9219523464496887986+abb26765/model.hlo.pb', '/var/tmp/neuron-compile-cache/neuronxcc-2.0.0.22266a0+a69f71e55/MODULE_16969875447143373016+abb26765/model.hlo.pb', '/var/tmp/neuron-compile-cache/neuronxcc-2.0.0.22266a0+a69f71e55/MODULE_3000743782456078279+abb26765/model.hlo.pb']
...
INFO ||NEURON_CACHE||: Current remaining items are 0, locked are 0, failed are 0, done are 5, total is 5
After all compilations are completed, a compilation summary is shown:
.. code:: bash
INFO: 2023-08-24 20:21:11.000895: 161136 INFO ||NEURON_PARALLEL_COMPILE||: {
INFO: "compilation_summary": {
INFO: "true": 2
INFO: },
INFO: "compilation_report": {
INFO: "/var/tmp/neuron-compile-cache/neuronxcc-2.0.0.22266a0+a69f71e55/MODULE_1970132581169579119+abb26765/model.hlo.pb": {
INFO: "status": true,
INFO: "retry": 0
INFO: },
INFO: "/var/tmp/neuron-compile-cache/neuronxcc-2.0.0.22266a0+a69f71e55/MODULE_16141953836240613513+abb26765/model.hlo.pb": {
INFO: "status": true,
INFO: "retry": 0
INFO: }
INFO: }
INFO: }
INFO: 2023-08-24 20:21:11.000895: 161136 INFO ||NEURON_PARALLEL_COMPILE||: Total graphs: 2
INFO: 2023-08-24 20:21:11.000895: 161136 INFO ||NEURON_PARALLEL_COMPILE||: Total successful compilations: 2
INFO: 2023-08-24 20:21:11.000895: 161136 INFO ||NEURON_PARALLEL_COMPILE||: Total failed compilations: 0
Now if you run your script (without ``neuron_parallel_compile``), it will be faster
since the compiled graphs are already cached.
``torchrun --nproc_per_node=2 <train script>``
``Note``: Except for the option that limits the number of run steps (such as ``--steps_this_run``),
the other options of ``<run commands>`` must match between the pre-compilation and
the actual run. If this is not the case, you may see additional compilations during the training
run because new graphs get generated, resulting in cache misses.
There may be additional compilations due to unreached execution paths (in case the
execution path is not reached in the first few steps of graph extraction), or changes
in parameters such as number of data parallel workers.
Each precompilation command or actual script execution command above can be prefixed with ``NEURON_COMPILE_CACHE_URL=<cache URL>`` or ``NEURON_CC_FLAGS="--cache_dir=<cache URL>"`` to specify a different cache location than the default (with ``--cache_dir`` taking precedence over ``NEURON_COMPILE_CACHE_URL`` if both are specified). Alternatively, the cache URL can also be specified in Python code using:
.. code:: python
os.environ['NEURON_CC_FLAGS'] = os.environ.get('NEURON_CC_FLAGS', '') + "--cache_dir=<cache URL>"
You need to specify the same cache URL for both the precompilation command (using ``neuron_parallel_compile``) and the actual script execution command if you want the previously compiled and cached graphs to be used for actual script execution.
The environment variables below are available to help modify ``neuron_parallel_compile`` behavior:
``NEURON_PARALLEL_COMPILE_MAX_RETRIES`` :
- Set the maximum number of retries when using :ref:`Neuron Persistent Cache <neuron-caching>` or :ref:`neuron_parallel_compile <pytorch-neuronx-parallel-compile-cli>`.
If set to N, the tool will try compilation N more time(s) if the first graph compilation
failed. Example: Set NEURON_PARALLEL_COMPILE_MAX_RETRIES=1 when precompiling on
trn1.2xlarge, where host memory and CPU resources are limited.
Default is 0.
``NEURON_IGNORE_TRAINING_SCRIPT_ERROR_AND_COMPILE`` :
- When using :ref:`Neuron Persistent Cache <neuron-caching>` or :ref:`neuron_parallel_compile <pytorch-neuronx-parallel-compile-cli>`, if you want to ignore an error in the training script
and compile the accumulated HLO graphs, you can do so by setting this environment variable.
Example: If NEURON_IGNORE_TRAINING_SCRIPT_ERROR_AND_COMPILE=1 is set when using ``neuron_parallel_compile``,
a crash in the training script would be ignored and the graphs collected up to the crash would be
compiled.
``NEURON_COMPILE_CACHE_URL``:
- Set the cache URL for the :ref:`Neuron Persistent Cache <neuron-caching>` and :ref:`neuron_parallel_compile <pytorch-neuronx-parallel-compile-cli>`.
If it starts with ``s3://``, AWS S3 will be used as the cache backend. Otherwise, a
local disk cache will be used. Default is ``/var/tmp/neuron-compile-cache``.
If this is specified together with the ``--cache_dir=<cache URL>`` option via ``NEURON_CC_FLAGS``, the ``--cache_dir`` option takes precedence.
Debugging with Neuron Persistent Cache
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A graph compilation can fail because of a compilation error or an environment issue (for example, compilation interrupted by Ctrl-C). The graph would be marked as failed, and a subsequent rerun would encounter a message like the one below:
.. code:: bash
INFO ||NCC_WRAPPER||: Got a cached failed neff at /var/tmp/neuron-compile-cache/neuronxcc-2.8.0.25+a3ad0f342/MODULE_12486829708343293975+d41d8cd9/model.neff. Will skip compilation, please set --retry_failed_compilation for recompilation.
To retry compilation,
add ``--retry_failed_compilation`` to the ``NEURON_CC_FLAGS`` environment variable. This will retry the compilation even if the graph was previously marked as a failed compilation.
.. code:: python
os.environ['NEURON_CC_FLAGS'] = os.environ.get('NEURON_CC_FLAGS', '') + ' --retry_failed_compilation'
See :ref:`Neuron Persistent Cache <neuron-caching>` for more information.
Separate collection and compilation commands
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For cases like finetuning, there could be multiple independent training tasks running on different nodes
and sharing many compilation graphs in common. ``neuron_parallel_compile`` provides commands to separate
the graph collection and compilation phases, so users can collect all graphs across different training sessions in advance to avoid duplicate compilations.
To only collect the graphs from trial executions of training scripts into Neuron Persistent Cache:
.. code:: bash
neuron_parallel_compile --command collect <run_script>
To compile the graphs previously collected using the ``collect`` command and store the compiled results (NEFFs) back into the Neuron Persistent Cache (make sure to use the same neuronx-cc compiler version as during the graph collection step):
.. code:: bash
neuron_parallel_compile --command compile <run_script>
Note: if ``--command`` is not specified, ``neuron_parallel_compile`` will do both collection and compilation phases by default.
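A sketch of the two-phase workflow for two fine-tuning tasks that share graphs (the script names are hypothetical):

.. code:: bash

   # Phase 1: collect graphs from trial runs of each task into the shared cache
   neuron_parallel_compile --command collect torchrun --nproc_per_node=2 finetune_task_a.py --steps_this_run=10
   neuron_parallel_compile --command collect torchrun --nproc_per_node=2 finetune_task_b.py --steps_this_run=10

   # Phase 2: compile all collected graphs once; graphs shared by both tasks compile only once
   neuron_parallel_compile --command compile torchrun --nproc_per_node=2 finetune_task_a.py --steps_this_run=10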
Cache maintenance commands
~~~~~~~~~~~~~~~~~~~~~~~~~~
The following commands are available to help maintain the cache.
.. warning::
Make sure no running process is using the cache when you use the ``clean`` or ``clear-locks`` commands, because interfering with a live cache can cause cache errors.
To clean cached files:
.. code:: bash
# WARNING: Make sure no running process is using the cache
neuron_parallel_compile --command clean
To clear file locks left behind when a ``neuron_parallel_compile`` execution was interrupted:
.. code:: bash
# WARNING: Make sure no running process is using the cache
neuron_parallel_compile --command clear-locks
Each command above can be prefixed with ``NEURON_COMPILE_CACHE_URL=<cache URL>`` or ``NEURON_CC_FLAGS="--cache_dir=<cache URL>"`` to specify a different cache location than the default.
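For example, to clean a cache stored in S3 (the bucket name is hypothetical):

.. code:: bash

   # WARNING: Make sure no running process is using the cache
   NEURON_COMPILE_CACHE_URL=s3://my-bucket/neuron-cache neuron_parallel_compile --command clean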
.. note::
Currently there is no automatic maintenance of cache size, either on disk or in S3. Please delete files (for example, artifacts from older compiler versions) as necessary to keep the cache size within your limit.
Analyze operations support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``analyze`` command checks which operations in the training script are supported by checking each operator against ``neuronx-cc``.
It is only supported for PyTorch models. The output of the tool is written to ``result.json`` within the output location.
.. code:: bash
neuron_parallel_compile --command analyze python3 training_script.py
Optional Arguments:
``--analyze-output ANALYZE_OUTPUT_LOCATION``
Only supported for --command analyze. Path to location where output will be persisted.
Default: cwd/model_analysis_result
``--analyze-verbosity {1,2}``
Only supported for --command analyze. Level of information to be included within the output.
1: add XLA operator information into the results.
2: add aten metadata into results.
Default: 2
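For example, a sketch that writes a lower-verbosity report to a custom location (the paths are hypothetical):

.. code:: bash

   neuron_parallel_compile --command analyze --analyze-output /tmp/analyze_out --analyze-verbosity 1 python3 training_script.py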
The tutorial for ``analyze`` can be found :ref:`here <torch-analyze-for-training-tutorial>`.
```
|
|
2023-09-29T20:54:48.457Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.rst.txt
|
```
.. _pytorch-neuronx-envvars:
PyTorch Neuron Environment Variables (``torch-neuronx``)
========================================================
Environment variables allow modifications to PyTorch Neuron behavior
without requiring code changes to the user script. It is recommended to set
them in code or just before invoking the python process, such as
``NEURON_FRAMEWORK_DEBUG=1 python3 <script>`` to avoid inadvertently
changing behavior for other scripts. Environment variables specific to
PyTorch Neuron are (experimental ones are noted):
``NEURON_CC_FLAGS``:
- Compiler options. Full compiler options are described in the :ref:`mixed-precision-casting-options`.
Additional options for the Neuron
Persistent Cache can be found in the :ref:`Neuron Persistent Cache <neuron-caching>` guide.
``NEURON_FRAMEWORK_DEBUG`` **[Experimental]**:
- Enable dumping of XLA graphs in both HLO format (intermediate representation) and text form for debugging.
``NEURON_EXTRACT_GRAPHS_ONLY`` **[Experimental]**:
- Dump the XLA graphs in HLO format (intermediate representation) and execute empty stubs with zero outputs
in order to allow multiple XLA graphs to be traced through a trial execution.
Used automatically for ahead-of-time
graph extraction for parallel compilation in :ref:`neuron_parallel_compile <pytorch-neuronx-parallel-compile-cli>`
tool. This environment variable can be checked in the training script
to skip validating outputs during a trial run, as in the sketch below.
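A minimal sketch of such a check (the validation function and loss threshold are hypothetical examples):

.. code:: python

   import os

   def validate_loss(loss_value):
       # During neuron_parallel_compile trial runs, outputs are placeholders,
       # so only validate outputs during real execution
       if os.environ.get("NEURON_EXTRACT_GRAPHS_ONLY", None) is None:
           assert loss_value < 10.0, "loss diverged"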
``NEURON_NUM_RECENT_MODELS_TO_KEEP`` **[Experimental]**:
- Keep only the N most recent graphs loaded in the Neuron runtime for each
process, where N is the value this environment variable is set to.
The default is to keep all graphs loaded by a process.
``NEURON_COMPILE_CACHE_URL``:
- Set the cache URL for the :ref:`Neuron Persistent Cache <neuron-caching>` and :ref:`neuron_parallel_compile <pytorch-neuronx-parallel-compile-cli>`.
If it starts with ``s3://``, it will use AWS S3 as the cache backend. Otherwise it will use the
local disk cache. Default is ``/var/tmp/neuron-compile-cache``.
If this is specified together with ``cache_dir=<cache_url>`` option via ``NEURON_CC_FLAGS``, the ``--cache_dir`` option takes precedence.
``NEURON_PARALLEL_COMPILE_MAX_RETRIES`` **[Experimental]**:
- Set the maximum number of retries when using :ref:`Neuron Persistent Cache <neuron-caching>` or :ref:`neuron_parallel_compile <pytorch-neuronx-parallel-compile-cli>`.
If set to N, the tool will try compilation N more time(s) if the first graph compilation failed.
Example: Set NEURON_PARALLEL_COMPILE_MAX_RETRIES=1 when precompiling on
trn1.2xlarge, where host memory and CPU resources are limited.
The default is 0.
``NEURON_IGNORE_TRAINING_SCRIPT_ERROR_AND_COMPILE`` **[Experimental]**:
- When using :ref:`Neuron Persistent Cache <neuron-caching>` or :ref:`neuron_parallel_compile <pytorch-neuronx-parallel-compile-cli>` , if you want to ignore the error in training script
and compile the accumulated HLO graphs, you can do so by setting this environment variable.
Example: If NEURON_IGNORE_TRAINING_SCRIPT_ERROR_AND_COMPILE=1 is set when using ``neuron_parallel_compile``,
a crash in the training script would be ignored and the graphs collected up to the crash would be
compiled.
``NEURON_DUMP_HLO_SNAPSHOT`` **[Experimental]**:
- Dump the inputs, outputs, and graph in HLO format of a graph execution in a snapshot file. This
variable can be set to ``1``, ``ON_NRT_ERROR``, ``ON_NRT_ERROR_CPU``, ``ON_NRT_ERROR_HYBRID`` to
dump snapshots at every iteration using CPU memory, or dump only on errors automatically using
device, host, and both device and host memory respectively.
``NEURON_NC0_ONLY_SNAPSHOT`` **[Experimental]**:
- Dump only the snapshot associated with Neuron Core 0 when ``NEURON_NC0_ONLY_SNAPSHOT=1`` and
the ``NEURON_DUMP_HLO_SNAPSHOT`` flag is set.
``NEURON_FUSE_SOFTMAX`` **[Experimental]**:
- Enable custom lowering for Softmax operation to enable compiler optimizations.
``NEURON_TRANSFER_ALL_PARAMETERS_WITH_STATIC_RING`` **[Experimental]**:
- When set to 1, mark all parameter transfers as static to enable runtime optimizations for torch.nn modules that are wrapped as done in Megatron-LM. This setting is not needed if torch.nn modules are not wrapped.
``BUCKET_CAP_MB`` **[PyTorch XLA]**:
- If there are many parameters, such as in BERT training, small allreduce sizes can limit performance. To improve performance, you can try increasing the bucket size using ``BUCKET_CAP_MB`` environment variable, which is set to 50MB by default. For example, BERT pretraining on multiple instances can see improved performance with ``BUCKET_CAP_MB=512``.
``XLA_USE_BF16`` **[PyTorch XLA]**:
- When ``XLA_USE_BF16=1``, PyTorch Neuron will automatically map both torch.float and torch.double tensors
to bfloat16 tensors and turn on Stochastic Rounding mode. This can both reduce memory footprint and improve performance.
Example: to enable bfloat16 autocasting and stochastic rounding, set XLA_USE_BF16=1 only, as
stochastic rounding mode is on by default when XLA_USE_BF16=1. If you would like to preserve some tensors in float32, see ``XLA_DOWNCAST_BF16`` below.
``XLA_DOWNCAST_BF16`` **[PyTorch XLA]**:
- When ``XLA_DOWNCAST_BF16=1``, PyTorch Neuron will automatically map torch.float tensors to bfloat16 tensors, torch.double tensors
to float32 tensors and turn on Stochastic Rounding mode. This can both reduce memory footprint and improve performance, while preserving some tensors in float32.
Example: to enable float to bfloat16 and double to float autocasting and stochastic rounding, set XLA_DOWNCAST_BF16=1 only, as
stochastic rounding mode is on by default when XLA_DOWNCAST_BF16=1. If you want to cast both torch.float and torch.double to bfloat16, please see ``XLA_USE_BF16`` above.
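A sketch of enabling one of these autocast modes from Python; this assumes the variable is set early in the script, before any tensors are created (choose only one mode):

.. code:: python

   import os

   # float and double -> bfloat16, with stochastic rounding on by default
   os.environ["XLA_USE_BF16"] = "1"
   # or instead: float -> bfloat16, double -> float32
   # os.environ["XLA_DOWNCAST_BF16"] = "1"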
``NEURON_RT_STOCHASTIC_ROUNDING_EN`` **[Neuron Runtime]**:
- When ``NEURON_RT_STOCHASTIC_ROUNDING_EN=1``, PyTorch Neuron will use stochastic rounding instead of
round-nearest-even for all internal rounding operations when casting from FP32 to a reduced precision data type (FP16, BF16, FP8, TF32).
This feature has been shown to improve
training convergence for reduced precision training jobs, such as when bfloat16 autocasting is
enabled. This is set to 1 by default by PyTorch Neuron when XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1. To switch to round-nearest-even mode, please set ``NEURON_RT_STOCHASTIC_ROUNDING_EN=0``.
``NEURON_RT_STOCHASTIC_ROUNDING_SEED`` **[Neuron Runtime]**:
- Sets the seed for the
random number generator used in stochastic rounding (see previous section). If this environment variable is not set, the seed is set to 0 by default. Please set ``NEURON_RT_STOCHASTIC_ROUNDING_SEED`` to a fixed value to ensure reproducibility between runs.
``NEURON_RT_VISIBLE_CORES`` **[Neuron Runtime]**:
- Integer range of specific NeuronCores needed by the process (for example, ``0-3`` specifies NeuronCores 0, 1, 2, and 3).
Use this environment variable when using torchrun to limit the launched processes to specific consecutive NeuronCores. To ensure best performance, multi-core jobs requiring N NeuronCores for collective communication must be placed at a NeuronCore ID that starts at a multiple of N, where N is the world size limited to 1, 2, 8, or 32. For example, a process using 2 NeuronCores can be mapped to 2 free NeuronCores starting at NeuronCore ID 0, 2, 4, 6, etc., and a process using 8 NeuronCores can be mapped to 8 free NeuronCores starting at NeuronCore ID 0, 8, 16, or 24.
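For example, a sketch that places two independent 2-core jobs on consecutive core ranges, following the placement rule above (the script name is hypothetical):

.. code:: bash

   # Worker A on NeuronCores 0-1, worker B on NeuronCores 2-3
   NEURON_RT_VISIBLE_CORES=0-1 python3 train.py &
   NEURON_RT_VISIBLE_CORES=2-3 python3 train.py &
   wait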
Additional Neuron runtime environment variables are described in `runtime
configuration
documentation <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-runtime/nrt-configurable-parameters.html>`__.
Additional XLA runtime environment variables are described in `PyTorch-XLA troubleshooting guide
<https://github.com/pytorch/xla/blob/v1.10.0/TROUBLESHOOTING.md#user-content-environment-variables>`__.
```
|
|
2023-09-29T20:54:48.532Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/arch/neuron-features/neuron-caching.rst.txt
|
```
.. _neuron-caching:
Neuron Persistent Cache
=======================
PyTorch Neuron (``torch-neuronx``) uses ``torch-xla``, and ``torch-xla`` operates in lazy mode. In other words, every operation in the training script
is recorded in a graph. The graph is executed only when the results are requested by
the user, for example via ``print`` or ``xm.mark_step``. Requesting results tells
``torch-xla`` that the recorded graph needs to be executed.
Before executing the graph on a Neuron device, ``torch-xla`` calls the Neuron Compiler (``neuronx-cc``) to compile the graph into a Neuron-specific
graph. Then the graph is executed on the :ref:`NeuronCore/s <neuroncores-arch>`. Compiling the graph involves
running optimizations that can make use of the :ref:`NeuronCore/s <neuroncores-arch>` efficiently. Running these
optimizations can be expensive and can result in long compile times. To save
users from compiling these graphs at every iteration, ``torch-xla`` maintains an
in-memory cache called the Just in Time (JIT) cache. When the user re-runs the same graph (e.g., the 2nd
iteration of the training run), ``torch-xla`` checks this JIT cache and re-uses
the cached compilation result, thereby avoiding the wait times.
Since the JIT cache is an in-memory cache, it needs to be constructed every time the training script is
run. Hence, if the user re-runs the training script, a new JIT cache is created. This causes a compilation for the first training graph.
To avoid such compilations across training runs, PyTorch Neuron (``torch-neuronx``) provides an on-disk
``Neuron Persistent Cache``. Since this cache is on disk, it persists across training runs. So
now, when a graph is compiled for the first time, the compilation result is saved in the
``Neuron Persistent Cache``. When the user re-runs the training script, since the JIT cache is not
ready, the graph is sent for compilation. PyTorch Neuron (``torch-neuronx``) then checks whether
the compiled result is present in the ``Neuron Persistent Cache``; if so, it returns the
compiled result. This on-disk cache thereby avoids compilations across training runs.
This cache is enabled by default for Neuron's PyTorch/XLA flow (training) as well as the
transformers-neuronx LLM inference package.
The default cache path is the directory ``/var/tmp/neuron-compile-cache``.
The diagram below shows the end-to-end flow:
|Image:|
As seen from the diagram, the operations are recorded in a graph in lazy mode, and only
when a mark_step is hit is the graph executed. Before execution, the graph passes through
two caches to check if we have compiled the graph sometime in the past. If yes, we reuse
the compilation result and execute with it. This avoids duplicate compilations.
Note that the JIT cache and the Neuron Persistent Cache are complementary to each other:
the JIT cache prevents duplicate compilations within a run, and the Neuron Persistent Cache prevents duplicate
compilations across training runs. For example, within a training script, we have a training
loop that iterates through the dataset. The first iteration would trace a unique graph
and the following iteration would trace a graph that is similar to the first one. In this case,
the subsequent iterations would hit the JIT cache and reuse the result. However, to save
users from compiling for the first iteration graph, ``Neuron Persistent Cache`` would be used. In this case,
the very first time when the script is run, the ``Neuron Persistent Cache`` would be updated. Going forward
when we re-run the training script, compilation results from ``Neuron Persistent Cache`` would be used.
To better understand how ``Neuron Persistent Cache`` works, consider the example below:
.. code:: python
import torch
import torch_xla
import torch_xla.core.xla_model as xm
device = xm.xla_device()
t1 = torch.randn(3, 3).to(device)
t2 = t1 / 0.5
x = t2.cpu()
Running the above example produces the following logs:
.. code:: bash
2023-08-25 21:51:36.000433: INFO ||NCC_WRAPPER||: Compile cache path: /var/tmp/neuron-compile-cache
.
Compiler status PASS
Re-running the above script would fetch the graph from the
neuron cache and you would see logs as follows:
.. code:: bash
2023-08-25 21:52:23.000451: INFO ||NCC_WRAPPER||: Compile cache path: /var/tmp/neuron-compile-cache
2023-08-25 21:52:23.000453: INFO ||NCC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.8.0.25+a3ad0f342/MODULE_198775565831884870+d41d8cd9/model.neff. Exiting with a successfully compiled graph.
As you can see, the next run picks up the compiled graph from the
cache, thereby saving compilation time.
The cache uses a hash of the Neuron compiler flags and the XLA graph as the
key. If the Neuron compiler version or the XLA graph changes, you will see
recompilation. Examples of changes that would cause the XLA graph to change
include:
- Model type and size
- Batch size
- Optimizer and optimizer hyperparameters
- Location of xm.mark_step()
To keep the cache size small and to enable weight/parameter updates without recompilation,
only the compute graphs are cached when using transformers-neuronx (where weights/parameters are inputs to the compute graphs) and
the torch-neuronx XLA training flow (where weights/parameters are inputs and outputs of the compute graphs).
Note that this caching mechanism doesn't apply to the torch-neuronx trace API where the weights/parameters are frozen and converted to constants,
then compiled together with the compute operations (traced graphs with frozen weights/parameters are not cached).
All compilation results are saved in the cache. To disable the cache, you
can pass the ``--no_cache`` option via ``NEURON_CC_FLAGS``:
.. code:: python
os.environ['NEURON_CC_FLAGS'] = os.environ.get('NEURON_CC_FLAGS', '') + ' --no_cache'
The default cache path is the directory ``/var/tmp/neuron-compile-cache``.
To change the cache's location, pass the ``cache_dir=<cache_url>``
option via ``NEURON_CC_FLAGS`` or set the ``NEURON_COMPILE_CACHE_URL=<cache_url>`` environment variable:
.. code:: python
os.environ['NEURON_CC_FLAGS'] = os.environ.get('NEURON_CC_FLAGS', '') + ' --cache_dir=<cache URL>'
.. code:: python
os.environ['NEURON_COMPILE_CACHE_URL'] = '<cache_URL>'
The cache URL specified using ``--cache_dir`` is prioritized over that specified using ``NEURON_COMPILE_CACHE_URL`` if both are set.
If ``<cache_url>`` starts with ``s3://``, it will use the AWS S3 URL as the cache location, provided that the corresponding S3 bucket exists and is both readable and writeable.
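For example, a sketch that points the cache at S3 from the shell (the bucket name and script are hypothetical):

.. code:: bash

   NEURON_COMPILE_CACHE_URL=s3://my-bucket/neuron-cache python3 train.py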
You can change the verbosity level of the compiler by setting ``log_level`` to ``WARNING``, ``INFO``,
or ``ERROR``. This can be done as follows:
.. code:: python
os.environ['NEURON_CC_FLAGS'] = os.environ.get('NEURON_CC_FLAGS', '') + ' --log_level=INFO'
A graph compilation can fail because of a compilation error or an environment issue (for example, compilation interrupted by Ctrl-C). The graph would be marked as failed, and a subsequent rerun would encounter a message like the one below:
.. code:: bash
INFO ||NCC_WRAPPER||: Got a cached failed neff at /var/tmp/neuron-compile-cache/neuronxcc-2.8.0.25+a3ad0f342/MODULE_12486829708343293975+d41d8cd9/model.neff. Will skip compilation, please set --retry_failed_compilation for recompilation.
To retry compilation,
add ``--retry_failed_compilation`` to the ``NEURON_CC_FLAGS`` environment variable. When the script is re-run, all previously failed compilations are retried and fresh results are saved in the cache.
.. code:: python
os.environ['NEURON_CC_FLAGS'] = os.environ.get('NEURON_CC_FLAGS', '') + ' --retry_failed_compilation'
Note that all flags demonstrated above are parsed by a tool called ``neuron_cc_wrapper``, which is a wrapper over the Neuron Compiler CLI that provides the caching mechanism. These flags are not passed on to the Neuron Compiler CLI itself.
.. |Image:| image:: ./images/NeuronCaching.png
```
|
|
2023-09-29T20:54:48.614Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.rst.txt
|
```
.. _pytorch-neuronx-debug:
How to debug models in PyTorch Neuron (``torch-neuronx``)
=========================================================
.. contents:: Table of Contents
:local:
:depth: 2
Torch-XLA evaluates operations lazily, which means it builds a symbolic
graph in the background, and the graph is executed on hardware only when
the user requests an output (for example, by printing it) or a mark_step is encountered.
To effectively debug training scripts with torch-xla, please use one of
the approaches mentioned below:
**Printing Metrics**
~~~~~~~~~~~~~~~~~~~~
Torch-xla provides a utility that records metrics of different sections
of the code. These metrics can help figure out things like: How much
time is spent in compilation? How much time is spent in execution? To
check the metrics:
1. Import metrics: ``import torch_xla.debug.metrics as met``
2. Print metrics at the end of the step: ``print(met.metrics_report())``
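A minimal sketch that triggers one compilation and execution and then prints the report:

.. code:: python

   import torch
   import torch_xla.core.xla_model as xm
   import torch_xla.debug.metrics as met

   device = xm.xla_device()
   # Build a small graph lazily, then force evaluation by moving the result to CPU
   result = (torch.randn(2, 2).to(device) * 2.0).cpu()
   print(met.metrics_report())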
Printing metrics should produce an output that looks like this:
.. code:: bash
Metric: CompileTime
TotalSamples: 1
Accumulator: 09s969ms486.408us
Percentiles: 1%=09s969ms486.408us; 5%=09s969ms486.408us; 10%=09s969ms486.408us; 20%=09s969ms486.408us; 50%=09s969ms486.408us; 80%=09s969ms486.408us; 90%=09s969ms486.408us; 95%=09s969ms486.408us; 99%=09s969ms486.408us
.....
Metric: ExecuteTime
TotalSamples: 1
Accumulator: 186ms062.970us
Percentiles: 1%=186ms062.970us; 5%=186ms062.970us; 10%=186ms062.970us; 20%=186ms062.970us; 50%=186ms062.970us; 80%=186ms062.970us; 90%=186ms062.970us; 95%=186ms062.970us; 99%=186ms062.970us
....
Metric: TensorsGraphSize
TotalSamples: 1
Accumulator: 9.00
Percentiles: 1%=9.00; 5%=9.00; 10%=9.00; 20%=9.00; 50%=9.00; 80%=9.00; 90%=9.00; 95%=9.00; 99%=9.00
Metric: TransferFromServerTime
TotalSamples: 2
Accumulator: 010ms130.597us
ValueRate: 549ms937.108us / second
Rate: 108.372 / second
Percentiles: 1%=004ms948.602us; 5%=004ms948.602us; 10%=004ms948.602us; 20%=004ms948.602us; 50%=006ms181.995us; 80%=006ms181.995us; 90%=006ms181.995us; 95%=006ms181.995us; 99%=006ms181.995us
Metric: TransferToServerTime
TotalSamples: 6
Accumulator: 061ms698.791us
ValueRate: 007ms731.182us / second
Rate: 0.665369 / second
Percentiles: 1%=006ms848.579us; 5%=006ms848.579us; 10%=006ms848.579us; 20%=007ms129.666us; 50%=008ms940.718us; 80%=008ms496.166us; 90%=024ms636.413us; 95%=024ms636.413us; 99%=024ms636.413us
Metric: TransferToServerTransformTime
TotalSamples: 6
Accumulator: 011ms835.717us
ValueRate: 001ms200.844us / second
Rate: 0.664936 / second
Percentiles: 1%=108.403us; 5%=108.403us; 10%=108.403us; 20%=115.676us; 50%=167.399us; 80%=516.659us; 90%=010ms790.400us; 95%=010ms790.400us; 99%=010ms790.400us
.....
Counter: xla::_copy_from
Value: 7
Counter: xla::addmm
Value: 2
Counter: xla::empty
Value: 5
Counter: xla::t
Value: 2
....
Metric: XrtCompile
TotalSamples: 1
Accumulator: 09s946ms607.609us
Mean: 09s946ms607.609us
StdDev: 000.000us
Percentiles: 25%=09s946ms607.609us; 50%=09s946ms607.609us; 80%=09s946ms607.609us; 90%=09s946ms607.609us; 95%=09s946ms607.609us; 99%=09s946ms607.609us
Metric: XrtExecute
TotalSamples: 1
Accumulator: 176ms932.067us
Mean: 176ms932.067us
StdDev: 000.000us
Percentiles: 25%=176ms932.067us; 50%=176ms932.067us; 80%=176ms932.067us; 90%=176ms932.067us; 95%=176ms932.067us; 99%=176ms932.067us
Metric: XrtReadLiteral
TotalSamples: 2
Accumulator: 608.578us
Mean: 304.289us
StdDev: 067.464us
Rate: 106.899 / second
Percentiles: 25%=236.825us; 50%=371.753us; 80%=371.753us; 90%=371.753us; 95%=371.753us; 99%=371.753us
As seen, you can get useful information about graph compile
times/execution times. You can also know which operators are present in
the graph, which operators are run on the CPU and which operators are run on an XLA device.
For example, operators that have the prefix ``aten::`` run on the CPU, since they do not have
an XLA lowering. All operators with the prefix ``xla::`` run on an XLA device. Note: aten operators
that do not have an XLA lowering result in graph fragmentation and might end up slowing down the
entire execution. If you encounter such operators, create a request for operator support.
**Printing Tensors**
~~~~~~~~~~~~~~~~~~~~
Users can print tensors in their script as below:
.. code:: python
import os
import torch
import torch_xla
import torch_xla.core.xla_model as xm
device = xm.xla_device()
input1 = torch.randn(2,10).to(device)
# Defining 2 linear layers
linear1 = torch.nn.Linear(10,30).to(device)
linear2 = torch.nn.Linear(30,20).to(device)
# Running forward
output1 = linear1(input1)
output2 = linear2(output1)
print(output2)
Since torch-xla evaluates operations lazily, when you try to print
``output2``, the graph associated with the tensor is evaluated.
When a graph is evaluated, it is first compiled for the device and then executed on
the selected device. Note: each tensor has a graph associated
with it, and each can result in graph compilations and executions. For
example, in the above script, if you try to print ``output1``, a new
graph is cut and you would see another evaluation. To avoid multiple evaluations, you can make use of ``mark_step`` (next section).
**Use mark_step**
~~~~~~~~~~~~~~~~~
Torch-XLA provides an API called ``mark_step`` which evaluates the graph
collected up to that point. While this is similar to printing an output tensor,
in that a graph is also evaluated, there is a difference. When
an output tensor is printed, only the graph associated with that specific tensor is
evaluated, whereas mark_step causes all the output tensors up to the ``mark_step`` call to be evaluated
in a single graph. Hence, any tensor print after ``mark_step`` is
effectively free of cost, as the tensor values are already evaluated.
Consider the example below:
.. code:: python
import os
import torch
import torch_xla
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met
device = xm.xla_device()
input1 = torch.randn(2,10).to(device)
# Defining 2 linear layers
linear1 = torch.nn.Linear(10,30).to(device)
linear2 = torch.nn.Linear(30,20).to(device)
# Running forward
output1 = linear1(input1)
output2 = linear2(output1)
xm.mark_step()
print(output2)
print(output1)
# Printing the metrics to check if compilation and execution occurred
print(met.metrics_report())
In the printed metrics, the number of compiles and
executions is only 1, even though 2 tensors are printed.
Hence, to avoid multiple graph evaluations, it is recommended that you
visualize tensors after a ``mark_step``. You can also make use of the
`add_step_closure <https://pytorch.org/xla/release/1.9/index.html#torch_xla.core.xla_model.add_step_closure>`__
API for this purpose. With this API, you pass in the tensors that need to
be visualized/printed. The added tensors are then preserved in the
graph and can be printed as part of the callback function passed to the
API. Here is a sample usage:
https://github.com/pytorch/xla/blob/master/test/test_train_mp_mnist.py#L133
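As a rough sketch of that pattern (the model and tensor names are illustrative):

.. code:: python

   import torch
   import torch_xla.core.xla_model as xm

   device = xm.xla_device()
   linear = torch.nn.Linear(10, 10).to(device)
   output = linear(torch.randn(2, 10).to(device))

   # The closure runs after the step's graph is evaluated, so printing here is cheap
   xm.add_step_closure(lambda t: print(t), args=(output,))
   xm.mark_step()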
**Note:** Graph compilations can take time as the compiler optimizes the graph to run on device.
**Using Eager Debug Mode**
~~~~~~~~~~~~~~~~~~~~~~~~~~
Eager debug mode provides a convenient utility to step through the code and evaluate operators one by one for correctness. It is useful for inspecting your models the way you would in eager-mode frameworks like PyTorch and TensorFlow. In eager debug mode, operations are executed eagerly: as soon as an operation is registered with torch-xla, it is sent for compilation and
execution. Since each compilation involves only a single operation, the time spent
per compilation is minimal. Moreover, the chances of hitting the framework compilation cache
increase, as models tend to have repeated operations throughout.
Consider example 1 below:
.. code:: python
# Example 1
import os
# You need to set this env variable before importing torch-xla
# to run in eager debug mode.
os.environ["NEURON_USE_EAGER_DEBUG_MODE"] = "1"
import torch
import torch_xla
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met
device = xm.xla_device()
input1 = torch.randn(2,10).to(device)
# Defining 2 linear layers
linear1 = torch.nn.Linear(10,30).to(device)
linear2 = torch.nn.Linear(30,20).to(device)
# Running forward
output1 = linear1(input1)
output2 = linear2(output1)
# Printing the metrics to check if compilation and execution occurred
# Here, in the metrics you should notice that the XRTCompile and XRTExecute
# value is non-zero, even though no tensor is printed. This is because, each
# operation is executed eagerly.
print(met.metrics_report())
print(output2)
print(output1)
# Printing the metrics to check if compilation and execution occurred.
# Here the XRTCompile count should be same as the previous count.
# In other words, printing tensors did not incur any extra compile
# and execution of the graph
print(met.metrics_report())
As seen from the above scripts, each operator is evaluated eagerly and
there is no extra compilation when output tensors are printed. Moreover, together with
the on-disk Neuron Persistent Cache, eager debug mode only incurs a one-time
compilation cost when the ops are first run. When the script is run again, the compiled ops are
pulled from the persistent cache. Any changes you make to the
training script would result in the re-compilation of only the newly
inserted operations. This is because each operation is compiled
independently. Consider example 2 below:
.. code:: python
# Example 2
import os
# You need to set this env variable before importing torch-xla
# to run in eager debug mode.
os.environ["NEURON_USE_EAGER_DEBUG_MODE"] = "1"
import torch
import torch_xla
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met
os.environ['NEURON_CC_FLAGS'] = "--log_level=INFO"
device = xm.xla_device()
input1 = torch.randn(2,10).to(device)
# Defining 2 linear layers
linear1 = torch.nn.Linear(10,30).to(device)
linear2 = torch.nn.Linear(30,20).to(device)
linear3 = torch.nn.Linear(20,30).to(device)
linear4 = torch.nn.Linear(30,20).to(device)
# Running forward
output1 = linear1(input1)
output2 = linear2(output1)
output3 = linear3(output2)
# Note the number of compiles at this point and compare
# with the compiles in the next metrics print
print(met.metrics_report())
output4 = linear4(output3)
print(met.metrics_report())
Running the example 2 script after running the example 1 script, you may notice that from the start until the statement ``output2 = linear2(output1)``,
all the graphs hit the persistent cache. Executing the line
``output3 = linear3(output2)`` results in a new compilation for the ``linear3`` layer only, because the layer configuration is new.
Now, when we run
``output4 = linear4(output3)``, you would observe that no new compilation
happens. This is because the graph for ``linear4`` is the same as the graph for
``linear2``, and hence the compiled graph for ``linear2`` is reused for ``linear4`` by the framework's internal cache.
Eager debug mode avoids the wait times that tensor printing can incur due to large-graph compilation.
It is designed only for debugging purposes, so when the training script is ready, remove the ``NEURON_USE_EAGER_DEBUG_MODE`` environment
variable from the script in order to obtain optimal performance.
By default, in eager debug mode the
logging level of the Neuron compiler is set to error, so no
logs are generated unless there is an error. Before your first
print, if there are many operations that need to be compiled, there
might be a small delay. If you want to check the logs, switch on
the ``INFO`` logs for the compiler using:
.. code:: python
os.environ["NEURON_CC_FLAGS"] = "--log_level=INFO"
**Profiling Model Run**
~~~~~~~~~~~~~~~~~~~~~~~
Profiling a model run can help identify different bottlenecks and
resolve issues faster. You can profile different sections of the code to
see which block is the slowest. To profile a model run, follow the
steps below:
1. Add: ``import torch_xla.debug.profiler as xp``
2. Start the server. This can be done by adding the following line after
creating the xla device: ``server = xp.start_server(9012)``
3. In a separate terminal, start TensorBoard. The logdir should be in
the same directory from which you run the script.
.. image:: /images/tensorboard.png
:alt: Image: tensorboard.png
Open TensorBoard in a browser and go to the profile section in the top
right. Note: you may have to install the profile plugin using
``pip install tensorboard-plugin-profile``.
4. When you click on the profile, it should give an option to capture
a profile. Clicking on capture profile produces the following pop-up.
.. image:: /images/popup.png
:alt: Image: popup.png
In the URL field, enter ``localhost:9012``. The port in this URL should
be the same as the one you gave when starting the server in the script.
5. Once done, click capture and it should automatically load the
following page:
.. image:: /images/./tb_1.png
:alt: Image: tb_1.png
6. To check the profile for different blocks of code, head to
``trace_viewer`` under ``Tools`` (in the left column).
.. image:: /images/./options.png
:alt: Image: options.png
7. It should show a profile that looks like this:
.. image:: /images/./profile_large.png
:alt: Image: profile_large.png
Note: By default, torch-xla times different blocks of code inside
the library. However, you can also profile blocks of code in your own
scripts. This can be done by placing the code within an ``xp.Trace``
context as follows:
.. code:: python
....
for epoch in range(total_epochs):
    inputs = torch.randn(1,10).to(device)
    labels = torch.tensor([1]).to(device)
    with xp.Trace("model_build"):
        loss = model(inputs, labels)
    with xp.Trace("loss_backward"):
        loss.backward()
....
It should produce a profile that has the ``model_build`` and
``loss_backward`` sections timed. This way you can time any block of
the script for debugging.
.. image:: /images/./profile_zoom.png
:alt: Image: Screen profile_zoom.png
Note: If you are running your training script in a docker container, to view
TensorBoard you should launch the docker container with the ``--network host`` flag,
e.g. ``docker run --network host my_image:my_tag``.
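Putting these steps together, the following is a minimal sketch of an instrumented script; the toy model, loop bounds, and trace names are placeholders for illustration:
.. code:: python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.profiler as xp
device = xm.xla_device()
server = xp.start_server(9012)  # same port you enter in the capture pop-up
model = torch.nn.Linear(10, 2).to(device)
for step in range(100):
    inputs = torch.randn(1, 10).to(device)
    with xp.Trace("forward"):  # appears as a named block in trace_viewer
        output = model(inputs)
    xm.mark_step()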
.. _torch-neuronx-snapshotting:
**Snapshotting**
~~~~~~~~~~~~~~~~
Snapshotting models can be used to dump debug information that can then be sent
to the Neuron team. Neuron execution relies on a series of compiled graphs. Internally,
graph HLOs are used as an intermediate representation which is then compiled. Then, during
execution, the graph inputs are passed to the Neuron runtime, which produces
outputs using the compiled graph. Snapshotting saves the inputs to a graph
execution, executes the graph, saves the outputs of the execution, and then
bundles and dumps the inputs, outputs, and graph HLO in one file. This is
illustrated here:
.. image:: /images/./snapshot-diagram.png
:alt: Image: snapshot-diagram.png
This feature can be enabled using the following environment variables,
which can be set at the beginning of your script as follows.
.. code:: python
....
os.environ["XLA_FLAGS"] = " --xla_dump_to=dump"
os.environ["NEURON_FRAMEWORK_DEBUG"] = "1"
os.environ["NEURON_DUMP_HLO_SNAPSHOT"] = "1"
....
This set of environment variables produces snapshots under the ``dump``
folder with the extension ``.hlo.snapshot.pb`` or ``.decomposed_hlo_snapshot``
at every iteration. For example, a file like the following would
be produced:
.. code:: bash
dump/module_SyncTensorsGraph.387.pid_12643.execution_7496.hlo_snapshot.pb
The dumping environment variable can be set and unset at specific
iterations as shown in the following example.
.. code:: python
....
for step in range(STEPS):
    if step == 20:
        os.environ["NEURON_DUMP_HLO_SNAPSHOT"] = "1"
    else:
        os.environ.pop('NEURON_DUMP_HLO_SNAPSHOT', None)
    train_x = torch.randn(BATCH_SIZE, 28, 28)
    train_x = train_x.to(device)
    loss = model(train_x)
    loss.backward()
    optimizer.step()
    xm.mark_step()
....
Additionally, we provide capabilities to snapshot graphs automatically.
The environment variables above can be set as follows:
.. code:: python
....
os.environ["XLA_FLAGS"] = " --xla_dump_to=dump"
os.environ["NEURON_FRAMEWORK_DEBUG"] = "1"
os.environ["NEURON_DUMP_HLO_SNAPSHOT"] = "ON_NRT_ERROR"
....
When unexpected errors occur, such as a graph execution producing NaNs,
snapshots are automatically produced and execution is terminated.
Occasionally, for larger models, automatic snapshotting may fail to capture
snapshots because device memory is exhausted. In this case, the above
flag can be set to
``os.environ["NEURON_DUMP_HLO_SNAPSHOT"] = "ON_NRT_ERROR_HYBRID"``, which
allocates memory for inputs on both the device and the host.
In some cases this may still run out of memory, and the flag may need to be
set to ``os.environ["NEURON_DUMP_HLO_SNAPSHOT"] = "ON_NRT_ERROR_CPU"`` to
avoid allocating any device memory at all for automatic snapshotting.
**Snapshot FAQs**
-----------------
**When should I use this feature?**
This feature should be used when debugging errors that require interfacing
with and providing debug data to the Neuron team. Snapshotting may be redundant
and unnecessary in some situations. For example, when only the model weights are
necessary for debugging, methods such as checkpointing may be more convenient to use.
**What sort of data is captured with these snapshots?**
The type of data captured by these snapshots may include model graphs in HLO form,
weights/parameters, optimizer states, intermediate tensors, and gradients.
This data may be considered sensitive, which should be taken into account before
sending it to the Neuron team.
**What is the size of these snapshots?**
The size of snapshots can be significant for larger models such as GPT or BERT,
with several GBs of data for larger graphs, so it is recommended to check
that sufficient disk space exists before using snapshotting. In addition, limiting
the number of snapshots taken in a run helps preserve disk space.
**Will snapshotting add overhead to my execution?**
Snapshotting adds a small overhead to execution in most cases. This
overhead can be significant if snapshots are dumped at every iteration. To
alleviate some of this overhead when snapshotting is
not necessary on all cores, the following environment variable can be set to
collect snapshots only on the first core. In addition, checkpointing in tandem
with snapshotting can be useful to reduce overhead: a checkpoint close to
the problem iteration can be captured, then execution resumed with
snapshots enabled.
.. code:: python
....
os.environ["NEURON_NC0_ONLY_SNAPSHOT"] = "1"
....
**How can I share snapshots with the Neuron team?**
These snapshots can be shared with the Neuron team via an S3 bucket.
```
|
2023-09-29T20:54:48.648Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/programming-guide/training/index.rst.txt
|
```
Developer Guide (``torch-neuronx``)
====================================
.. toctree::
:maxdepth: 1
:hidden:
/frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide
/frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug
/frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide
.. dropdown:: Developer Guide
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
:open:
.. include:: /frameworks/torch/torch-neuronx/programming-guide/training/index.txt
```
|
2023-09-29T20:54:48.661Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.rst.txt
|
```
.. _pytorch-neuronx-programming-guide:
Developer Guide for Training with PyTorch Neuron (``torch-neuronx``)
=====================================================================
.. contents:: Table of Contents
:local:
:depth: 2
Trainium is designed to speed up model training and reduce training cost. It is available on the Trn1 instances. Each Trainium accelerator has two NeuronCores, which are the main neural network compute units.
PyTorch Neuron enables PyTorch users to train their models on Trainium's
NeuronCores with little code change to their training code. It is based
on the `PyTorch/XLA software package <https://pytorch.org/xla>`__.
This guide helps you get started with single-worker training and
distributed training using PyTorch Neuron.
PyTorch Neuron
--------------
Neuron XLA device
~~~~~~~~~~~~~~~~~
With PyTorch Neuron, the default XLA device is mapped to a NeuronCore. By default, one NeuronCore is configured. To use the Neuron XLA device, specify
the device as ``xm.xla_device()`` or ``'xla'``:
.. code:: python
import torch_xla.core.xla_model as xm
device = xm.xla_device()
or
.. code:: python
device = 'xla'
PyTorch models and tensors can be mapped to the device as usual:
.. code:: python
model.to(device)
tensor.to(device)
To move a tensor back to CPU, do:
.. code:: python
tensor.cpu()
or
.. code:: python
tensor.to('cpu')
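For illustration, here is a minimal end-to-end sketch; the toy model and shapes are placeholders:
.. code:: python
import torch
import torch_xla.core.xla_model as xm
device = xm.xla_device()
model = torch.nn.Linear(4, 2)
model.to(device)  # parameters now live on the NeuronCore
x = torch.randn(1, 4).to(device)
y = model(x)  # recorded lazily on the XLA device
print(y.cpu())  # materializes the result and copies it back to CPU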
PyTorch Neuron single-worker training/evaluation quick-start
--------------------------------------------------------------
PyTorch Neuron uses XLA to enable conversion of
PyTorch operations to Trainium instructions. To get started on PyTorch
Neuron, first modify your :ref:`training script <neuronx-mlp-training-tutorial>` to
use XLA in the same manner as described in `PyTorch/XLA
documentation <https://pytorch.org/xla>`__ and
use XLA device:
.. code:: python
import torch_xla.core.xla_model as xm
device = xm.xla_device()
# or
device = 'xla'
The NeuronCore is mapped to an XLA device. On a Trainium instance, the XLA device is automatically mapped to the first available NeuronCore.
By default the above steps will enable the training or evaluation script to run on one
NeuronCore. NOTE: Each process is mapped to one NeuronCore.
Finally, add ``mark_step`` at the end of the training or evaluation step to compile
and execute the training or evaluation step:
.. code:: python
xm.mark_step()
These changes can be placed in control-flows in order to keep the script
the same between PyTorch Neuron and CPU/GPU. For example, you can use an
environment variable to disable XLA which would cause the script to run
in PyTorch native mode (using CPU on Trainium instances and GPU on GPU
instances):
.. code:: python
device = 'cpu'
if not os.environ.get("DISABLE_XLA", None):
    device = 'xla'
...
# end of training step
if not os.environ.get("DISABLE_XLA", None):
    xm.mark_step()
More on the need for mark_step is at `Understand the lazy mode in
PyTorch Neuron <#understand-the-lazy-mode-in-pytorch-neuron>`__.
For a full runnable example, please see the :ref:`Single-worker MLP training
on Trainium tutorial
<neuronx-mlp-training-tutorial:single-worker-mlp-training-on-trainium>`.
PyTorch Neuron multi-worker data parallel training using torchrun
-----------------------------------------------------------------
Data parallel training allows you to replicate your script across
multiple workers, each worker processing a proportional portion of the
dataset, in order to train faster.
To run multiple workers in a data parallel configuration, with each worker
using one NeuronCore, first add an additional import for the parallel
dataloader:
::
import torch_xla.distributed.parallel_loader as pl
Next we initialize the Neuron distributed context using the XLA backend for torch.distributed:
::
import torch_xla.distributed.xla_backend
torch.distributed.init_process_group('xla')
Next, replace the ``optimizer.step()`` function call with
``xm.optimizer_step(optimizer)``, which adds gradient synchronization
across workers before taking the optimizer step:
::
xm.optimizer_step(optimizer)
If you're using a distributed dataloader, wrap your dataloader in
PyTorch/XLA's ``MpDeviceLoader`` class, which provides buffering
to hide CPU-to-device data load latency:
::
parallel_loader = pl.MpDeviceLoader(dataloader, device)
Within the training code, use ``xm.xrt_world_size()`` to get the world size,
and ``xm.get_ordinal()`` to get the global rank of the current process.
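Putting these pieces together, the following is a minimal data parallel skeleton intended to be launched with torchrun as shown below; the toy model, dataset, and hyperparameters are placeholders for illustration:
.. code:: python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
import torch_xla.distributed.xla_backend
torch.distributed.init_process_group('xla')
device = xm.xla_device()
print(f"worker {xm.get_ordinal()} of {xm.xrt_world_size()}")
model = torch.nn.Linear(10, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()
dataset = torch.utils.data.TensorDataset(
    torch.randn(64, 10), torch.randint(0, 2, (64,)))
sampler = torch.utils.data.distributed.DistributedSampler(dataset)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8, sampler=sampler)
parallel_loader = pl.MpDeviceLoader(dataloader, device)  # inserts mark_step per batch
for inputs, labels in parallel_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    xm.optimizer_step(optimizer)  # all-reduces gradients, then steps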
Then use the `PyTorch
torchrun <https://pytorch.org/docs/stable/elastic/run.html#launcher-api>`__
utility to run the script. For example, to run 32-worker data parallel
training:
``torchrun --nproc_per_node=32 <script and options>``
To run on multiple instances, make sure to use trn1.32xlarge instances
and use all 32 NeuronCores on each instance. For example, with two instances,
on the rank-0 Trn1 host, run with ``--node_rank=0`` using the torchrun utility:
.. code:: shell
torchrun --nproc_per_node=32 --nnodes=2 --node_rank=0 --master_addr=<root IP> --master_port=<root port> <script and options>
On the other Trn1 host, run with ``--node_rank=1``:
.. code:: shell
torchrun --nproc_per_node=32 --nnodes=2 --node_rank=1 --master_addr=<root IP> --master_port=<root port> <script and options>
It is important to launch the rank-0 worker with ``--node_rank=0`` to avoid a hang.
More information about torchrun can be found in the PyTorch documentation at
https://pytorch.org/docs/stable/elastic/run.html#launcher-api .
See the :ref:`Multi-worker data-parallel MLP training using torchrun
tutorial <neuronx-mlp-training-tutorial:multi-worker-data-parallel-mlp-training-using-torchrun>`
for a full example.
Conversion from Distributed Data Parallel (DDP) application
-----------------------------------------------------------
Distributed Data Parallel (DDP) in the torch.distributed module is a wrapper
to help convert single-worker training to distributed training. To
convert a torch.distributed Distributed Data Parallel (DDP)
application to PyTorch Neuron, first convert the application back to
single-worker training, which simply involves removing the DDP wrapper,
for example ``model = DDP(model, device_ids=[rank])``. After this,
follow the previous section to change to multi-worker training.
PyTorch Neuron environment variables
--------------------------------------
Environment variables allow modifications to PyTorch Neuron behavior
without requiring code change to user script. See :ref:`PyTorch Neuron environment variables <pytorch-neuronx-envvars>` for more details.
Neuron Persistent Cache for compiled graphs
-------------------------------------------
See :ref:`Neuron Persistent Cache for compiled graphs <neuron-caching>`
Number of graphs
-----------------
PyTorch/XLA converts PyTorch's eager mode execution to lazy-mode
graph-based execution. During this process, there can be multiple graphs
compiled and executed if there are extra mark-steps or functions with
implicit mark-steps. Additionally, more graphs can be generated if there
are different execution paths taken due to control-flows.
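As a small illustration (the shapes and branching are arbitrary), Python-level control flow like the following causes more than one graph to be compiled, because each branch traces a different sequence of operations:
.. code:: python
import torch
import torch_xla.core.xla_model as xm
device = xm.xla_device()
for step in range(4):
    x = torch.randn(2, 2).to(device)
    if step % 2 == 0:  # each branch records a different graph
        y = x * 2
    else:
        y = x + 1
    xm.mark_step()  # two distinct graphs get compiled across the loop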
Automatic casting of float tensors to BFloat16
----------------------------------------------
With PyTorch Neuron, the default behavior is for torch.float (FP32) and torch.double (FP64) tensors
to be mapped to torch.float in hardware. To reduce memory footprint and improve performance,
torch.float and torch.double tensors can automatically be converted to BFloat16 by setting
the environment variable ``XLA_USE_BF16=1``. Alternatively, torch.float can automatically be converted
to BFloat16 and torch.double converted to FP32 by setting the environment variable ``XLA_DOWNCAST_BF16=1``.
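As a minimal sketch, the variable can be set at the top of the script before ``torch_xla`` is imported (it can equally be exported in the shell before launching the script):
.. code:: python
import os
os.environ["XLA_USE_BF16"] = "1"  # float and double tensors map to BF16
# or: os.environ["XLA_DOWNCAST_BF16"] = "1"  # float -> BF16, double -> FP32
import torch
import torch_xla.core.xla_model as xm
device = xm.xla_device()
t = torch.randn(2, 2).to(device)  # stored as BF16 on the device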
Automatic Mixed-Precision
-------------------------
BF16 mixed-precision using PyTorch Autocast
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default, the compiler automatically casts internal FP32 operations to
BF16. You can disable this and allow PyTorch's BF16 mixed-precision to
do the casting. PyTorch's BF16 mixed-precision is achieved by casting
certain operations to operate in BF16. We currently use CUDA's list of
operations that can operate in BF16:
(NOTE: Although convolution is in the list below, it is currently unsupported by Neuron. See :ref:`model-architecture-fit`)
.. code:: bash
_convolution
_convolution
_convolution_nogroup
conv1d
conv2d
conv3d
conv_tbc
conv_transpose1d
conv_transpose2d
conv_transpose3d
convolution
cudnn_convolution
cudnn_convolution_transpose
cudnn_convolution
cudnn_convolution_transpose
cudnn_convolution
cudnn_convolution_transpose
prelu
addmm
addmv
addr
matmul
mm
mv
linear
addbmm
baddbmm
bmm
chain_matmul
linalg_multi_dot
To enable PyTorch's BF16 mixed-precision, first turn off the Neuron
compiler auto-cast:
.. code:: python
os.environ["NEURON_CC_FLAGS"] = "--auto-cast=none"
Next, overwrite ``torch.cuda.is_bf16_supported`` to return True:
.. code:: python
torch.cuda.is_bf16_supported = lambda: True
Next, per the recommendation from the official PyTorch documentation, place only
the forward pass of the training step in the ``torch.autocast`` scope:
.. code:: python
with torch.autocast(dtype=torch.bfloat16, device_type='cuda'):
    # forward pass
The device type is CUDA because we are using CUDA's list of BF16
compatible operations as mentioned above.
Example showing the original training code snippet:
.. code:: python
def train_loop_fn(train_loader):
    for i, data in enumerate(train_loader):
        inputs = data[0]
        labels = data[3]
        outputs = model(inputs, labels=labels)
        loss = outputs.loss / flags.grad_acc_steps
        loss.backward()
        optimizer.step()
        xm.mark_step()
The following shows the training loop modified to use BF16 autocast:
.. code:: python
os.environ["NEURON_CC_FLAGS"] = "--auto-cast=none"
def train_loop_fn(train_loader):
    for i, data in enumerate(train_loader):
        torch.cuda.is_bf16_supported = lambda: True
        with torch.autocast(dtype=torch.bfloat16, device_type='cuda'):
            inputs = data[0]
            labels = data[3]
            outputs = model(inputs, labels=labels)
            loss = outputs.loss / flags.grad_acc_steps
        loss.backward()
        optimizer.step()
        xm.mark_step()
For a full example of BF16 mixed-precision, see :ref:`PyTorch Neuron BERT Pretraining Tutorial <hf-bert-pretraining-tutorial>`.
See the official PyTorch documentation for more details about
`torch.autocast <https://pytorch.org/docs/stable/amp.html#autocasting>`__.
Tips and Best Practices
-----------------------
Understand the lazy mode in PyTorch Neuron
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
One significant difference between PyTorch Neuron and native PyTorch is
that PyTorch Neuron runs in lazy mode while native
PyTorch runs in eager mode. Tensors in lazy mode are placeholders for
building the computational graph until they are materialized after the
compilation and evaluation are complete. The PyTorch Neuron system
builds the computational graph on the fly when you call PyTorch APIs to
build the computation using tensors and operators. The computational
graph gets compiled and executed when ``xm.mark_step()`` is called
explicitly or implicitly by ``pl.MpDeviceLoader/pl.ParallelLoader``, or
when you explicitly request the value of a tensor such as by calling
``loss.item()`` or ``print(loss)``.
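A small illustration of this behavior (the tensor values are arbitrary):
.. code:: python
import torch
import torch_xla.core.xla_model as xm
device = xm.xla_device()
a = torch.ones(2, 2).to(device)
b = a + 1  # no device execution yet; the operation is only recorded
c = b * 3  # still recorded lazily
xm.mark_step()  # compiles and executes the recorded graph
print(c)  # the values are already materialized at this point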
.. _minimize-the-number-of-compilation-and-executions:
Minimize the number of compilation-and-executions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For best performance, you should keep in mind the possible ways to
initiate compilation-and-executions as described in `Understand the lazy
mode in PyTorch/XLA <#understand-the-lazy-mode-in-pytorch-neuron>`__ and
should try to minimize the number of compilation-and-executions.
Ideally, only one compilation-and-execution is necessary per training
iteration and is initiated automatically by
``pl.MpDeviceLoader/pl.ParallelLoader``. The ``MpDeviceLoader`` is
optimized for XLA and should always be used if possible for best
performance. During training, you might want to examine some
intermediate results such as loss values. In such case, the printing of
lazy tensors should be wrapped using ``xm.add_step_closure()`` to avoid
unnecessary compilation-and-executions.
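A minimal sketch of this pattern, using an arbitrary stand-in for the loss tensor:
.. code:: python
import torch
import torch_xla.core.xla_model as xm
device = xm.xla_device()
loss = (torch.randn(4, 4).to(device) ** 2).mean()  # stand-in for a real loss
def _print_loss(l):
    print(f"loss: {l}")
# The closure runs after the step's graph executes, so printing the
# (now materialized) tensor does not trigger an extra evaluation.
xm.add_step_closure(_print_loss, args=(loss,))
xm.mark_step()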
Ensure common initial weights across workers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To achieve the best accuracy during data parallel training, all workers need
to have the same initial parameter states. This can be achieved by using
the same seed across the workers. In the case of the HuggingFace library,
the ``set_seed`` function can be used
(https://github.com/pytorch/xla/issues/3216).
Use PyTorch/XLA's model save function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To avoid problems with saving and loading checkpoints, make sure you use
PyTorch/XLA's model save function to properly checkpoint your model. For
more information about the function, see
`torch_xla.core.xla_model.save <https://pytorch.org/xla/release/1.9/index.html#torch_xla.core.xla_model.save>`__
in the *PyTorch on XLA Devices* documentation.
When training using multiple devices, ``xla_model.save`` can result in high host memory usage. If you see such high usage
causing the host to run out of memory, use `torch_xla.utils.serialization.save <https://pytorch.org/xla/release/1.9/index.html#torch_xla.utils.serialization.save>`__,
which saves the model in a serialized manner. When saved with the ``serialization.save`` API, the model should
be loaded using the ``serialization.load`` API. More information is available at `Saving and Loading Tensors <https://pytorch.org/xla/release/1.9/index.html#saving-and-loading-xla-tensors>`__.
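A minimal sketch of both save paths; the file names and toy model are placeholders, and ``serialization`` is imported here as ``xser``:
.. code:: python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.utils.serialization as xser
model = torch.nn.Linear(4, 2).to(xm.xla_device())
# Standard path: xm.save moves tensors to CPU on the master before writing.
xm.save(model.state_dict(), "model.pt")
# Lower host-memory alternative: serialized save and load must be paired.
xser.save(model.state_dict(), "model_ser.pt")
state_dict = xser.load("model_ser.pt")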
FAQ
---
What is the difference between Trainium and Inferentia?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Trainium is an accelerator designed to speed up training, whereas
Inferentia is an accelerator designed to speed up inference.
Debugging and troubleshooting
-----------------------------
To debug on PyTorch Neuron, please follow the :ref:`debug
guide <pytorch-neuronx-debug>`.
```
|
2023-09-29T20:54:48.721Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.rst.txt
|
```
.. _torch-neuronx-dev-guide:
Developer Guide for Profiling with PyTorch Neuron (``torch-neuronx``)
=====================================================================
.. contents:: Table of Contents
:local:
:depth: 2
Introduction
~~~~~~~~~~~~
The Neuron PyTorch profiler is a context manager wrapping the entire model
and training loop; specifically, the context manager
``torch_neuronx.experimental.profiler.profile``. It is a wrapper of
the XLA Debug Profiler, imported as
``import torch_xla.debug.profiler as xp``, and is backwards-compatible with it.
Here are the parameters of the profiler context manager:
1. ``port``: Port to run the profiling gRPC server on. Default is 9012.
2. ``profile_type``: Either “trace” or “operator”. “trace”
is the Torch Runtime Trace Level, while “operator” is the Model
Operator Trace Level.
3. ``ms_duration``: How long the profiler will capture the
HLO artifacts from the model to view in the profiler. The unit is
milliseconds.
4. ``neuron_tensorboard_plugin_dir``: The directory the Neuron TensorBoard plugin will write files to
(NB: assumes that the TensorBoard logdir is "log/").
5. ``delete_working``: If set to ``False``, turns off the deletion of temporary files (default ``True``).
We move the model to the xla device *inside the context manager.* This is important,
as this allows the profiler to collect the operations and processes from the
``neuronx-cc`` compiler artifacts. If the model is moved to the xla device outside of
the context manager, the profiling won't work.
.. note::
The warnings about the ``XLA_IR_DEBUG`` and ``XLA_HLO_DEBUG``
env vars not being set can be ignored for the most part. This warning
only comes into play when compiling the model for Neuron outside of the
profiler context manager.
After running this script, notice that a ``./logs`` directory has been
created. It contains the TensorBoard logs, including the
profiler views.
Example used in this guide
~~~~~~~~~~~~~~~~~~~~~~~~~~
We will use the following code sample to describe in detail how to use the Neuron PyTorch profiling API.
Prerequisites
^^^^^^^^^^^^^
1. Initial `Trn1 setup for PyTorch
(torch-neuronx) <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/setup/pytorch-install.html>`__
has been done
Environment
^^^^^^^^^^^
::
#activate python virtual environment and install tensorboard_plugin_neuronx
source ~/aws_neuron_venv_pytorch_p38/bin/activate
pip install tensorboard_plugin_neuronx
#create work directory for the Neuron Profiling tutorials
mkdir -p ~/neuron_profiling_tensorboard_examples
cd ~/neuron_profiling_tensorboard_examples
Setup
^^^^^
Create a new working directory:
::
mkdir simple_demo
cd simple_demo
Save the following code as ``demo.py``:
::
    import os
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # XLA imports
    import torch_xla
    import torch_xla.core.xla_model as xm
    import torch_xla.debug.profiler as xp
    import torch_neuronx
    from torch_neuronx.experimental import profiler

    os.environ["NEURON_CC_FLAGS"] = "--cache_dir=./compiler_cache"

    # Global constants
    EPOCHS = 10

    # Declare 3-layer MLP Model
    class MLP(nn.Module):
        def __init__(self, input_size=10, output_size=2, layers=[5, 5]):
            super(MLP, self).__init__()
            self.fc1 = nn.Linear(input_size, layers[0])
            self.fc2 = nn.Linear(layers[0], layers[1])
            self.fc3 = nn.Linear(layers[1], output_size)

        def forward(self, x):
            x = F.relu(self.fc1(x))
            x = F.relu(self.fc2(x))
            x = self.fc3(x)
            return F.log_softmax(x, dim=1)

    def main():
        # Fix the random number generator seeds for reproducibility
        torch.manual_seed(0)

        # XLA: Specify XLA device (defaults to a NeuronCore on Trn1 instance)
        device = xm.xla_device()

        # Start the profiler context-manager
        with torch_neuronx.experimental.profiler.profile(
                port=9012,
                profile_type='trace',
                ms_duration=15000) as profiler:

            # IMPORTANT: the model has to be transferred to XLA within
            # the context manager, otherwise profiling won't work
            model = MLP().to(device)
            optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
            loss_fn = torch.nn.NLLLoss()

            # start training loop
            print('----------Training ---------------')
            model.train()
            for epoch in range(EPOCHS):
                optimizer.zero_grad()
                train_x = torch.randn(1, 10).to(device)
                train_label = torch.tensor([1]).to(device)

                # forward
                loss = loss_fn(model(train_x), train_label)
                # backward
                loss.backward()
                optimizer.step()

                # XLA: collect ops and run them in XLA runtime
                xm.mark_step()

            print('----------End Training ---------------')

    if __name__ == '__main__':
        main()
Then run it!
::
python demo.py
.. _Tensorboard Interface Overview:
Viewing the Trace on TensorBoard
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To view the TensorBoard logs, run ``tensorboard --logdir=./logs``
.. note::
Depending on the TensorBoard version, ``--load_fast=false`` might need to be
added as an additional parameter to view the trace.
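For example:

::

    tensorboard --logdir=./logs --load_fast=false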
Take note of the port (usually 6006) and enter ``localhost:<port>`` into
the local browser (assuming port forwarding is set up properly):
|tensorboard-url-image|
Once ``localhost:<port>`` is entered, verify that the
“NEURON” view is shown:
|tensorboard-NEURON-header|
If “NEURON” isn’t shown on the
top left hand side, select “NEURON” from the dropdown on the top right
hand side.
|tensorboard-NEURON-dropdown|
On the Left Hand Side, there are two dropdown menus: Run & Tool.
|tensorboard-run-tool-dropdowns|
The Run dropdown contains the Torch Runtime
Trace and Operator Level Trace views; however, since we only ran the
“trace” (i.e. Torch Runtime Trace Level), we’ll only see that log.
The Torch Runtime Trace views are simply dates in
``year_month_day_hour_minute_second_millisecond`` format. The Tool
dropdown only contains the “trace” option.
The trace view should look like this:
|tensorboard-run-trace-original|
Let’s zoom into the following section of the trace:
|tensorboard-run-trace-selected-section|
After zooming in the trace should look like this:
|tensorboard-run-trace-selected-section-zoomed|
Notice on the top, there is a ``StepMarker`` process followed by a ``NeuronDevice Execution``
process. This correlates to the ``xm.mark_step()`` call, which executes
the collected graph of our model on Neuron. For the Operator Level Trace
(“operator”), we’ll be profiling the model operators that occur on
Neuron. In other words, the profiler will zoom into the
``NeuronDevice Execution`` process if the user specifies
``profile_type='operator'``.
Using Named Blocks for the Trace
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
What we've produced so far is the default behavior of the profiler; however,
it is often more useful to profile specific blocks of our code to narrow down
performance bottlenecks. To do this, use the ``xp.Trace`` context manager.
Replace the respective code in the training loop with the following:
::
    ...
    optimizer.zero_grad()
    train_x = torch.randn(1, 10).to(device)
    train_label = torch.tensor([1]).to(device)

    with xp.Trace("model_build"):
        loss = loss_fn(model(train_x), train_label)

    with xp.Trace("loss_backward"):
        loss.backward()

    with xp.Trace("optimizer_step"):
        optimizer.step()

    # XLA: collect ops and run them in XLA runtime
    xm.mark_step()
    ...
Run the script, and follow the same TensorBoard steps. Afterwards, the
trace should look like this:
|tensorboard-run-trace-selected-section-zoomed-named-traces|
As seen, the ``model_build``, ``loss_backward`` and ``optimizer_step``
sections have been profiled.
.. note::
If you are running your training script in a docker container, to
view the TensorBoard, you should launch the docker container using the flag
``--network host``, e.g. ``docker run --network host my_image:my_tag``.
.. |tensorboard-url-image| image:: /images/Neuron_Profiler_Tensorboard_Url.jpg
.. |tensorboard-NEURON-header| image:: /images/Neuron_Profiler_Tensorboard_Header.jpg
.. |tensorboard-NEURON-dropdown| image:: /images/Neuron_Profiler_Tensorboard_Dropdown.jpg
.. |tensorboard-run-tool-dropdowns| image:: /images/Neuron_Profiler_Tensorboard_Run_Tool_Dropdowns.jpg
.. |tensorboard-run-trace-original| image:: /images/Neuron_Profiler_Runtime_Trace_Original.jpg
.. |tensorboard-run-trace-selected-section| image:: /images/Neuron_Profiler_Runtime_Trace_Section_Selection.jpg
.. |tensorboard-run-trace-selected-section-zoomed| image:: /images/Neuron_Profiler_Runtime_Trace_Section_Selection_Zoomed.jpg
.. |tensorboard-run-trace-selected-section-zoomed-named-traces| image:: /images/Neuron_Profiler_Runtime_Trace_Section_Selection_Zoomed_Named_Traces.jpg
.. |tensorboard-operator-framework-view| image:: /images/Neuron_Profiler_T1_Op_Framework_View.png
.. |tensorboard-operator-hlo-view| image:: /images/Neuron_Profiler_T1_Op_HLO_View.png
.. |tensorboard-operator-trace-view| image:: /images/Neuron_Profiler_T1_Op_Trace_View.png
.. |tensorboard-operator-trace-fusion-simple| image:: /images/Neuron_Profiler_T1_Op_Trace_Fusion_Simple.png
.. |tensorboard-operator-trace-fusion-complex| image:: /images/Neuron_Profiler_T1_Op_Trace_Fusion_Complex.png
```
|
|
2023-09-29T20:54:48.806Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.rst.txt
|
```
.. _tensorflow-servingx-neuronrt-visible-cores:
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
=====================================================
TensorFlow Serving allows customers to scale up inference workloads
across a network. TensorFlow Neuron Serving uses the same API as normal
TensorFlow Serving with two differences: (a) the saved model must be
compiled for Neuron and (b) the entry point is a different binary
named ``tensorflow_model_server_neuron``. The binary is found at
``/usr/local/bin/tensorflow_model_server_neuron`` and is pre-installed
in the DLAMI or installed with the APT/YUM ``tensorflow-model-server-neuronx`` package.
Install TensorFlow Model Server and Serving API
-----------------------------------------------
Follow the steps in :ref:`install-neuronx-tensorflow`.
Then install the model server using either apt-get or yum.
.. code:: bash
sudo apt-get install tensorflow-model-server-neuronx
or
.. code:: bash
sudo yum install tensorflow-model-server-neuronx
You will also need the TensorFlow Serving API (use ``--no-deps`` to prevent
installation of regular ``tensorflow``).
.. code:: bash
pip install --no-deps tensorflow_serving_api
For the example image preprocessing using Keras preprocessing, the
Python Imaging Library Pillow is required:
.. code:: bash
pip install pillow
To work around the h5py issue https://github.com/aws/aws-neuron-sdk/issues/220:
.. code:: bash
pip install "h5py<3.0.0"
Export and Compile Saved Model
------------------------------
The following example shows graph construction followed by the addition
of a Neuron compilation step before exporting to a saved model.
.. code:: python
    import tensorflow as tf
    import tensorflow_neuronx as tfnx
    import numpy as np

    tf.keras.backend.set_learning_phase(0)
    tf.keras.backend.set_image_data_format('channels_last')

    image_sizes = [224, 224]
    model = tf.keras.applications.ResNet50(weights='imagenet')
    example_inputs = tf.random.uniform([1, *image_sizes, 3], dtype=tf.float32)
    model_neuron = tfnx.trace(model, example_inputs)

    # run the model once to define the forward pass and allow for saving
    model_neuron(example_inputs)
    tf.keras.models.save_model(model_neuron, './resnet50_neuron/1')
Serving Saved Model
-------------------
You can now serve the saved model with the
``tensorflow_model_server_neuron`` binary. To utilize multiple NeuronCores,
it is recommended to launch multiple TensorFlow model servers that
listen on the same gRPC port:
.. code:: bash
export NEURON_RT_VISIBLE_CORES=0 # important to set this environment variable before launching model servers
tensorflow_model_server_neuron --model_name=resnet50_neuron \
--model_base_path=$(pwd)/resnet50_neuron/ --port=8500
# then to run another server on a different neuron core open another
# window and run this, except this time set NEURON_RT_VISIBLE_CORES=1
# you can keep doing this up to the number of Neuron Cores on your machine
export NEURON_RT_VISIBLE_CORES=1
tensorflow_model_server_neuron --model_name=resnet50_neuron \
--model_base_path=$(pwd)/resnet50_neuron/ --port=8500
The compiled model is staged in Neuron DRAM by the server to prepare
for inference.
Generate inference requests to the model server
-----------------------------------------------
Now run inferences via GRPC as shown in the following sample client
code:
.. code:: python
    import numpy as np
    import grpc
    import tensorflow as tf
    from tensorflow.keras.preprocessing import image
    from tensorflow.keras.applications.resnet50 import preprocess_input
    from tensorflow_serving.apis import predict_pb2
    from tensorflow_serving.apis import prediction_service_pb2_grpc
    from tensorflow.keras.applications.resnet50 import decode_predictions

    tf.keras.backend.set_image_data_format('channels_last')

    if __name__ == '__main__':
        channel = grpc.insecure_channel('localhost:8500')
        stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
        img_file = tf.keras.utils.get_file(
            "./kitten_small.jpg",
            "https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg")
        img = image.load_img(img_file, target_size=(224, 224))
        img_array = preprocess_input(image.img_to_array(img)[None, ...])
        request = predict_pb2.PredictRequest()
        request.model_spec.name = 'resnet50_neuron'
        request.inputs['input_1'].CopyFrom(
            tf.make_tensor_proto(img_array, shape=img_array.shape))
        result = stub.Predict(request)
        prediction = tf.make_ndarray(result.outputs['output_1'])
        print(decode_predictions(prediction))
```
|
|
2023-09-29T20:54:48.840Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.rst.txt
|
```
.. _setup-trn1-multi-node-execution:
How to prepare trn1.32xlarge for multi-node execution
=====================================================
EFA is a low-latency transport that is used for inter-node communication. Multi-node jobs, such as distributed training, require EFA to be enabled on every participating trn1/trn1n 32xlarge instance. Please note that EFA is currently not available on the smaller instance sizes, so they cannot be used for running multi-node jobs.
trn1.32xlarge has 8 EFA devices and trn1n.32xlarge has 16 EFA devices. The rest of the document refers to trn1.32xlarge, but everything also applies to trn1n.32xlarge except for the different number of EFA devices.
Launching an instance
^^^^^^^^^^^^^^^^^^^^^
Before launching trn1 you need to create a security group that allows EFA traffic between the instances. Follow Step 1 here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/efa-start.html#efa-start-security and note the newly created security group ID. It will be used in the next step.
Determine the region, the AMI, the key and the subnet that will be used to launch trn1.
At the moment launching Trn1 instances with EFA support from the console is not recommended. The instances must be launched using the AWS CLI. To launch a trn1.32xlarge instance:
.. code-block:: bash
export AMI=<ami>
export SUBNET=<subnet id>
export SG=<security group created on the previous step>
export REG=<AWS region>
export KEY=<the key>
aws ec2 run-instances --region ${REG} \
--image-id ${AMI} --instance-type trn1.32xlarge \
--key-name ${KEY} \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=\"friendly name\"}]" \
--network-interfaces \
"NetworkCardIndex=0,DeviceIndex=0,Groups=${SG},SubnetId=${SUBNET},InterfaceType=efa" \
"NetworkCardIndex=1,DeviceIndex=1,Groups=${SG},SubnetId=${SUBNET},InterfaceType=efa" \
"NetworkCardIndex=2,DeviceIndex=1,Groups=${SG},SubnetId=${SUBNET},InterfaceType=efa" \
"NetworkCardIndex=3,DeviceIndex=1,Groups=${SG},SubnetId=${SUBNET},InterfaceType=efa" \
"NetworkCardIndex=4,DeviceIndex=1,Groups=${SG},SubnetId=${SUBNET},InterfaceType=efa" \
"NetworkCardIndex=5,DeviceIndex=1,Groups=${SG},SubnetId=${SUBNET},InterfaceType=efa" \
"NetworkCardIndex=6,DeviceIndex=1,Groups=${SG},SubnetId=${SUBNET},InterfaceType=efa" \
"NetworkCardIndex=7,DeviceIndex=1,Groups=${SG},SubnetId=${SUBNET},InterfaceType=efa"
Note that one of the cards is assigned DeviceIndex 0 and the rest are assigned DeviceIndex 1. Cloud-init will configure instance routing so that outgoing traffic is prioritized by the device index field, i.e., outbound traffic will always egress from the interface with DeviceIndex 0. That avoids network connectivity problems when multiple interfaces are attached to the same subnet.
To launch a trn1n.32xlarge instance:
.. code-block:: bash
export AMI=<ami>
export SUBNET=<subnet id>
export SG=<security group created on the previous step>
export REG=<AWS region>
export KEY=<the key>
aws ec2 run-instances --region ${REG} \
--image-id ${AMI} --instance-type trn1n.32xlarge \
--key-name ${KEY} \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=\"friendly name\"}]" \
--network-interfaces \
NetworkCardIndex=0,DeviceIndex=0,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=1,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=2,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=3,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=4,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=5,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=6,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=7,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=8,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=9,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=10,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=11,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=12,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=13,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=14,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa \
NetworkCardIndex=15,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa
Assigning public IP address
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Multi-interface instances are not assigned a public IP automatically. If you require access to the newly launched trn1 from the Internet, you need to assign an Elastic IP to the interface with DeviceIndex = 0. To find the right interface, either parse the output of the instance launch command or use the describe-instances command:
.. code-block:: bash
$ aws ec2 describe-instances --instance-ids i-01b17afa1e6021d6c
{
"Reservations": [
{
"Groups": [],
"Instances": [
{
"AmiLaunchIndex": 0,
"ImageId": "ami-01257e71ecb2f431c",
"InstanceId": "i-01b17afa1e6021d6c",
"InstanceType": "trn1.32xlarge",
.........
"NetworkInterfaces": [
{
"Attachment": {
"AttachTime": "2023-05-19T17:37:26.000Z",
"AttachmentId": "eni-attach-03730388baedd4b96",
"DeleteOnTermination": true,
"DeviceIndex": 0,
"Status": "attached",
"NetworkCardIndex": 4
},
"Description": "",
.........
"InterfaceType": "efa"
},
{
"Attachment": {
"AttachTime": "2023-05-19T17:37:26.000Z",
"AttachmentId": "eni-attach-0e1242371cd2532df",
"DeleteOnTermination": true,
"DeviceIndex": 0,
"Status": "attached",
"NetworkCardIndex": 3
},
"Description": "",
................
}
]
}
The second entry in “NetworkInterfaces” in this example has “DeviceIndex” 0 and should be used to attach the EIP.
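As a convenience, the interface ID can also be extracted directly (assumes ``jq`` is installed):

.. code-block:: bash

    aws ec2 describe-instances --instance-ids i-01b17afa1e6021d6c \
        | jq -r '.Reservations[].Instances[].NetworkInterfaces[]
                 | select(.Attachment.DeviceIndex == 0) | .NetworkInterfaceId'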
Software installation
^^^^^^^^^^^^^^^^^^^^^
The software required for EFA operation is distributed via the aws-efa-installer package. The package is preinstalled on the Neuron DLAMI. If you’d like to install the latest version, or if you are using your own AMI, follow these steps:
.. code-block:: bash
curl -O https://efa-installer.amazonaws.com/aws-efa-installer-latest.tar.gz
wget https://efa-installer.amazonaws.com/aws-efa-installer.key && gpg --import aws-efa-installer.key
cat aws-efa-installer.key | gpg --fingerprint
wget https://efa-installer.amazonaws.com/aws-efa-installer-latest.tar.gz.sig && gpg --verify ./aws-efa-installer-latest.tar.gz.sig
tar -xvf aws-efa-installer-latest.tar.gz
cd aws-efa-installer && sudo bash efa_installer.sh --yes
cd
sudo rm -rf aws-efa-installer-latest.tar.gz aws-efa-installer
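To verify the installation, ``fi_info`` (installed as part of the package) should report the ``efa`` provider:

.. code-block:: bash

    fi_info -p efa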
Containers
^^^^^^^^^^
The aws-efa-installer package must be installed on the instance; it installs both the EFA kernel module and the libraries. The libraries must be accessible to an application running inside a container. This can be accomplished either by installing the aws-efa-installer package inside the container or by making the library installation path on the instance available inside the container (a bind-mount sketch follows the library locations below).
If installing the aws-efa-installer package inside a container, pass the flag that disables the kernel module installation:
.. code-block:: bash
sudo bash efa_installer.sh --yes --skip-kmod
The location of the libraries is distribution specific:
.. code-block:: bash
/opt/amazon/efa/lib # Ubuntu
/opt/amazon/efa/lib64 # AL2
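For illustration, a sketch of the bind-mount approach on Ubuntu (the device node and image name are placeholders; pass one ``--device`` flag per EFA device):

.. code-block:: bash

    docker run --device=/dev/infiniband/uverbs0 \
        -v /opt/amazon/efa/lib:/opt/amazon/efa/lib \
        -e LD_LIBRARY_PATH=/opt/amazon/efa/lib \
        my_image:my_tag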
Application execution environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When running an application, make sure the following environment variables are set:
.. code-block:: bash
FI_PROVIDER=efa
FI_EFA_USE_DEVICE_RDMA=1
FI_EFA_FORK_SAFE=1 # only required when running on AL2
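For example, when exporting them before launching a two-node job (the launcher arguments are illustrative):

.. code-block:: bash

    export FI_PROVIDER=efa
    export FI_EFA_USE_DEVICE_RDMA=1
    export FI_EFA_FORK_SAFE=1   # only on AL2
    torchrun --nnodes=2 --nproc_per_node=32 --node_rank=$NODE_RANK \
        --master_addr=$MASTER_ADDR --master_port=2022 train.py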
Appendix - trn1 instance launch example script
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: bash
#!/bin/bash
set -e
# AWS CLI v2 Installation instructions for Linux:
# curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
# unzip awscliv2.zip
# sudo ./aws/install
# $ aws --version
# aws-cli/2.11.20 Python/3.11.3 Linux/5.15.0-1034-aws exe/x86_64.ubuntu.20 prompt/off
# Someone with AWS console admin privileges can create an access key ID and secret for this:
# Configure credentials: aws configure
# Search the AWS AMIs for the most recent "Deep Learning Base Neuron AMI (Ubuntu 20.04) <Latest_Date>"
# This one is 2023-05-17 - ami-01257e71ecb2f431c
AMI= ... # the ami
KEYNAME= ... # your key
SG= ... # the security group
SUBNET= ... # the subnet
REGION=us-west-2
# Launch instances
echo "Starting instances..."
output=$(aws ec2 --region $REGION run-instances \
--tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=_Trainium-Big}]' \
--count 1 \
--image-id $AMI \
--instance-type trn1.32xlarge \
--key-name $KEYNAME \
--network-interfaces "NetworkCardIndex=0,DeviceIndex=0,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa" \
"NetworkCardIndex=1,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa" \
"NetworkCardIndex=2,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa" \
"NetworkCardIndex=3,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa" \
"NetworkCardIndex=4,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa" \
"NetworkCardIndex=5,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa" \
"NetworkCardIndex=6,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa" \
"NetworkCardIndex=7,DeviceIndex=1,Groups=$SG,SubnetId=$SUBNET,InterfaceType=efa")
# Parse the output to get the instance IDs
instance_ids=$(echo $output | jq -r .Instances[].InstanceId)
echo "Got created instance IDs: $instance_ids"
# Loop through each instance ID
public_ips=""
for instance_id in $instance_ids; do
echo "Waiting for instance $instance_id to be running..."
aws ec2 wait instance-running --instance-ids $instance_id --region $REGION
echo "Creating SSH public IP newtork inteface for instance $instance_id..."
interface_id=""
INSTANCE_INFO=$(aws ec2 describe-instances --region $REGION --instance-ids $instance_id)
OUTPUT=$(echo "$INSTANCE_INFO" | jq -r '.Reservations[0].Instances[0].NetworkInterfaces[] | "\(.Attachment.DeviceIndex),\(.NetworkInterfaceId)"')
echo $OUTPUT
for pair in $OUTPUT; do
IFS="," read -r device_idx ni_id <<< $pair
if [ "$device_idx" == "0" ]; then
interface_id=$ni_id
break
fi
done
if [ "$interface_id" == "" ]; then
exit -1
fi
echo $interface_id
echo "Checking for unassociated Elastic IPs..."
unassociated_eips=$(aws ec2 describe-addresses --region $REGION | jq -r '.Addresses[] | select(.AssociationId == null) | .AllocationId')
if [[ -z "$unassociated_eips" ]]; then
echo "No unassociated Elastic IPs found. Allocating new Elastic IP..."
eip_output=$(aws ec2 allocate-address --domain vpc --region $REGION)
eip_id=$(echo $eip_output | jq -r .AllocationId)
echo "Allocated Elastic IP ID: $eip_id"
eip_public_ip=$(echo $eip_output | jq -r .PublicIp)
echo "Allocated Elastic IP Public IP: $eip_public_ip"
echo "Note that this newly allocated Elasic IP will persist even after the instance termination"
echo "If the Elastic IP is not going to be reused do not forget to delete it"
else
# use the first unassociated Elastic IP found
eip_id=$(echo "$unassociated_eips" | head -n 1)
echo "Found unassociated Elastic IP ID: $eip_id"
eip_public_ip=$(aws ec2 describe-addresses --allocation-ids $eip_id --region $REGION | jq -r .Addresses[0].PublicIp)
echo "Elastic IP Public IP: $eip_public_ip"
fi
public_ips+="${eip_public_ip} "
echo "Associating Elastic IP with network interface $interface_id..."
aws ec2 associate-address --allocation-id $eip_id --network-interface-id $interface_id --region $REGION
echo "Associated Elastic IP with network interface."
done
echo "The instance has been launched.\nYou can now SSH into $public_ips with key $KEYNAME.\n"
.. note:: If you face connectivity issues after launching a trn1\\trn1n 32xlarge instance on Ubuntu, please follow the troubleshooting instructions mentioned :ref:`here <trn1_ubuntu_troubleshooting>`.
```
|
|
2023-09-29T20:54:48.866Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.rst.txt
|
```
.. _pytorch-neuron-supported-operators:
PyTorch Neuron (``torch-neuronx``) - Supported Operators
========================================================
.. contents:: Table of Contents
:local:
:depth: 2
Operator support
~~~~~~~~~~~~~~~~
The following table lists the aten operators supported by ``torch-neuronx``.
+----------------------------------+
| aten::_s_where |
+----------------------------------+
| aten::_softmax |
+----------------------------------+
| aten::_softmax_backward_data |
+----------------------------------+
| aten::_unsafe_view |
+----------------------------------+
| aten::add |
+----------------------------------+
| aten::addcdiv\_ |
+----------------------------------+
| aten::addcmul |
+----------------------------------+
| aten::addmm |
+----------------------------------+
| aten::bernoulli\_ |
+----------------------------------+
| aten::bmm |
+----------------------------------+
| aten::constant_pad_nd |
+----------------------------------+
| aten::div |
+----------------------------------+
| aten::embedding |
+----------------------------------+
| aten::embedding_dense_backward |
+----------------------------------+
| aten::empty |
+----------------------------------+
| aten::expand |
+----------------------------------+
| aten::fill\_ |
+----------------------------------+
| aten::index_select |
+----------------------------------+
| aten::_log_softmax |
+----------------------------------+
| aten::_log_softmax_backward_data |
+----------------------------------+
| aten::lt |
+----------------------------------+
| aten::mm |
+----------------------------------+
| aten::mul |
+----------------------------------+
| aten::native_batch_norm |
+----------------------------------+
| aten::native_batch_norm_backward |
+----------------------------------+
| aten::neg |
+----------------------------------+
| aten::permute |
+----------------------------------+
| aten::relu |
+----------------------------------+
| aten::rsub |
+----------------------------------+
| aten::select |
+----------------------------------+
| aten::slice |
+----------------------------------+
| aten::sqrt |
+----------------------------------+
| aten::sum |
+----------------------------------+
| aten::t |
+----------------------------------+
| aten::tanh |
+----------------------------------+
| aten::tanh_backward |
+----------------------------------+
| aten::threshold_backward |
+----------------------------------+
| aten::transpose |
+----------------------------------+
| aten::unsqueeze |
+----------------------------------+
| aten::view |
+----------------------------------+
| aten::zero\_ |
+----------------------------------+
```
|
|
2023-09-29T20:54:48.898Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/index.rst.txt
|
```
.. _tensorflow-neuron-main:
.. _tensorflow-neuron:
TensorFlow Neuron
=================
TensorFlow Neuron unlocks high-performance and cost-effective deep learning acceleration on AWS Trainium-based and Inferentia-based Amazon EC2 instances.
TensorFlow Neuron enables native TensorFlow models to be accelerated on Neuron devices, so you can use your existing framework application and get started easily with minimal code changes.
.. toctree::
:maxdepth: 1
:hidden:
/frameworks/tensorflow/tensorflow-setup
.. toctree::
:maxdepth: 2
:hidden:
Inference (Inf2 & Trn1) </frameworks/tensorflow/tensorflow-neuronx-inference>
Inference (Inf1) </frameworks/tensorflow/tensorflow-neuron-inference>
.. toctree::
:maxdepth: 1
:hidden:
/frameworks/tensorflow/training
.. card:: TensorFlow Neuron (``tensorflow-neuronx``) for Inference on ``Inf2`` & ``Trn1`` / ``Trn1n``
:link: inference-tensorflow-neuronx
:link-type: ref
:class-body: sphinx-design-class-title-small
.. card:: TensorFlow Neuron (``tensorflow-neuron``) for Inference on ``Inf1``
:link: inference-tensorflow-neuron
:link-type: ref
:class-body: sphinx-design-class-title-small
```
|
2023-09-29T20:54:49.002Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-setup.rst.txt
|
```
.. _tf-setup:
TensorFlow Neuron Setup
=======================
.. include:: tensorflow-setup.txt
```
|
2023-09-29T20:54:49.039Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/torch/torch-neuronx/training-troubleshooting.rst.txt
|
```
.. _pytorch-neuron-traning-troubleshooting:
PyTorch Neuron (``torch-neuronx``) for Training Troubleshooting Guide
=====================================================================
.. contents:: Table of contents
:local:
:depth: 2
This document shows common issues users may encounter while using
PyTorch-Neuron and provides guidance on how to resolve or work around them.
General Troubleshooting
-----------------------
For setting up EFA, which is needed for multi-node training, please see :ref:`setup-trn1-multi-node-execution`.
For XLA-related troubleshooting notes see :ref:`How to debug models in PyTorch
Neuron <pytorch-neuronx-debug>`
and `PyTorch-XLA troubleshooting
guide <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md>`__.
If your multi-worker training run is interrupted, you may need to kill
all the Python processes and reload the driver (WARNING: this kills all
Python processes on the system):
.. code:: bash
killall -9 python
killall -9 python3
sudo rmmod neuron; sudo modprobe neuron
To turn on RT debug:
.. code:: python
os.environ["NEURON_RT_LOG_LEVEL"] = "INFO"
To turn on Neuron NCCL debug:
.. code:: python
os.environ["NCCL_DEBUG"] = "WARN"
os.environ["NCCL_DEBUG_SUBSYS"] = "ALL"
If a process crashed during training, you can enable core dumps using the ``ulimit`` command:
.. code:: bash
ulimit -S -c unlimited
To see the types of signals that cause core dumps, see https://www.man7.org/linux/man-pages/man7/signal.7.html.
Note that core dumps take a significant amount of storage, so make sure there is enough free disk space before enabling them.
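For example, a quick check of the available disk space on the filesystem that will receive the dumps (here assuming /tmp, as in the core_pattern example below):
.. code:: bash
# Check free space before enabling core dumps
df -h /tmp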
On Ubuntu, if Apport is not running, the core dump file name defaults to "core" in the local directory. To change the file location and name format, modify ``/proc/sys/kernel/core_pattern`` (see https://www.kernel.org/doc/html/latest/admin-guide/sysctl/kernel.html#core-pattern for pattern info). For example, to dump to /tmp with the executable filename and process ID:
.. code:: bash
echo '/tmp/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern
For containers, install the appropriate dependencies during docker build ("apt-get update && apt-get -y install build-essential gdb") and start the container with "--ulimit core=-1" to enable core dumps and "-v /tmp/:/tmp/" to ensure core dumps written to /tmp are preserved when the container is stopped or deleted. Dependencies can also be installed after the container is started.
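A minimal sketch of that container setup (the image name is a placeholder for your own training image):
.. code:: bash
# Build-time debugging dependencies (Dockerfile):
# RUN apt-get update && apt-get -y install build-essential gdb
# Run with core dumps enabled and /tmp shared with the host
docker run --ulimit core=-1 -v /tmp/:/tmp/ -it <your-training-image> bash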
On Ubuntu, core dumps can also be handled by Apport, which is disabled by default. To enable Apport, run ``sudo service apport start``. ``/proc/sys/kernel/core_pattern`` is updated by the Apport service. After a crash, look in /var/log/apport.log for the core dump file name, which should be located in /var/lib/apport/coredump/.
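Collected into one sequence, the Apport workflow described above looks like this:
.. code:: bash
sudo service apport start        # enable Apport; it updates /proc/sys/kernel/core_pattern
cat /var/log/apport.log          # after a crash, find the core dump file name here
ls /var/lib/apport/coredump/     # core dumps are written to this directory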
Once you have the core dump, you can use gdb to debug further (for Python applications, <executable> is ``python`` or ``python3``):
.. code:: bash
gdb <executable> <core file>
If a process (e.g. the XRT server) is killed due to out-of-memory on the host (i.e. you see "Out of memory: Killed process <PID>" in syslog or dmesg), no core dump is generated. However, you can trigger a kernel panic (and thus a core dump) on OOM by setting ``/proc/sys/vm/panic_on_oom`` to 1 on the host or from inside the container.
On the host where you need ``sudo`` (this change will be reflected inside the container also):
.. code:: bash
echo 1 | sudo tee /proc/sys/vm/panic_on_oom
From inside a container where ``sudo`` doesn't work (this change will be reflected on the host also):
.. code:: bash
echo 1 > /proc/sys/vm/panic_on_oom
Possible Error Conditions
-------------------------
Non-Fatal Error OpKernel ('op: "TPU*" device_type: "CPU"')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
During execution using PyTorch Neuron, you may see these non-fatal error messages:
.. code:: bash
E tensorflow/core/framework/op_kernel.cc:1676] OpKernel ('op: "TPURoundRobin" device_type: "CPU"') for unknown op: TPURoundRobin
E tensorflow/core/framework/op_kernel.cc:1676] OpKernel ('op: "TpuHandleToProtoKey" device_type: "CPU"') for unknown op: TpuHandleToProtoKey
They don't affect the operation of PyTorch Neuron and can be ignored.
XLA runtime error: "Invalid argument: Cannot assign a device for operation"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code:: bash
RuntimeError: tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:490 : Check failed: session->session()->Run(session_work->feed_inputs, session_work->outputs_handles, &outputs) == ::tensorflow::Status::OK() (INVALID_ARGUMENT: Cannot assign a device for operation XRTAllocateFromTensor: {{node XRTAllocateFromTensor}} was explicitly assigned to /job:localservice/replica:0/task:0/device:TPU:0 but available devices are [ /job:localservice/replica:0/task:0/device:CPU:0, /job:localservice/replica:0/task:0/device:TPU_SYSTEM:0, /job:localservice/replica:0/task:0/device:XLA_CPU:0 ]. Make sure the device specification refers to a valid device.
[[XRTAllocateFromTensor]] vs. OK)
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
xla::util::MultiWait::Complete(std::function<void ()> const&)
clone
*** End stack trace ***
The above error indicates that the framework was not able to initialize the Neuron runtime. If you get
the above error, check the following:
1. Make sure no other process is holding the NeuronCores. If one is, you may have to kill that process.
2. If no such process is running, try reloading the driver using ``sudo rmmod neuron; sudo modprobe neuron``.
Error: “Could not start gRPC server”
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you get a “Could not start gRPC server” error, please check whether there
are any leftover Python processes from a previous interrupted run and
terminate them before restarting the run.
.. code:: bash
E0207 17:22:12.592127280 30834 server_chttp2.cc:40] {"created":"@1644254532.592081429","description":"No address added out of total 1 resolved","file":"external/com_github_grpc_grpc/src/core/ext/t
ransport/chttp2/server/chttp2_server.cc","file_line":395,"referenced_errors":[{"created":"@1644254532.592078907","description":"Failed to add any wildcard listeners","file":"external/com_github_grpc_grpc/s
rc/core/lib/iomgr/tcp_server_posix.cc","file_line":342,"referenced_errors":[{"created":"@1644254532.592072626","description":"Unable to configure socket","fd":10,"file":"external/com_github_grpc_grpc/src/c
ore/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":216,"referenced_errors":[{"created":"@1644254532.592068939","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1644254532.592078512","description":"Unable to configure socket"
,"fd":10,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":216,"referenced_errors":[{"created":"@1644254532.592077123","description":"Address already in
use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]}]}]}
2022-02-07 17:22:12.592170: E tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:545] Unknown: Could not start gRPC server
Failed compilation result in the cache
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
All compilation results are saved by default in the ``Neuron Persistent Cache``. If the Neuron Compiler
fails to compile a graph, the failed result is also saved in the cache. The reason is that if
the user reruns the same script, we want them to error out early rather than wait for
the compilation to progress and hit the same error at a later stage. However, in certain
cases a failed compilation may be due to an environment issue. One possible reason
for failure is that the process ran out of memory during compilation. This can happen if you are
running multiple processes in parallel such that not enough memory is available to compile the
graph. Failures of this kind can be easily mitigated by re-running the compilation. To
retry a failed compilation, pass ``--retry_failed_compilation``
as follows:
.. code:: python
os.environ['NEURON_CC_FLAGS'] = os.environ.get('NEURON_CC_FLAGS', '') + ' --retry_failed_compilation'
This would retry the compilation and would replace a failed result in the cache with a
successful compilation result.
Compilation errors when placing NeuronCache home directory on NFS/EFS/FSx mounted drive
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Currently, the NeuronCache default root directory is /var/tmp, which is local to the instance you are running on. You can modify the location of the NeuronCache root directory using ``NEURON_CC_FLAGS='--cache_dir=<root dir>'``. However, when the NeuronCache directory is placed in a directory that is part of an NFS-mounted drive shared among multiple instances, you may encounter file errors such as file not found, file corruption, or KeyError when running multi-instance training:
.. code:: bash
KeyError: 'neff_cache2/neuron-compile-cache/USER_neuroncc-1.0.48875.0+7437fbf18/MODULE_7223055628515330524/MODULE_0_SyncTensorsGraph.14_7223055628515330524_compute1-dy-kaena-training-2-1-e859998e-3035-5df63dab5ce63'
This is a result of limitations of file locking on NFS. EFS/FSx exhibit similar limitations. The workaround is to set up separate NeuronCache root directories for each worker instance, such as ``NEURON_CC_FLAGS="--cache_dir=$HOME/neuron_cache/bert/`hostname`"``, where the home directory is shared among worker instances as in ParallelCluster.
Consider the use case of a ParallelCluster with SLURM cluster management. The home directory of the head node is shared via NFS with worker instances. Also, SLURM terminates idle worker instances when the cluster is configured as a dynamic auto-scaling cluster, and the default cache in the terminated worker instance's /var/tmp is deleted. So to persist the cache across runs separated by a cluster idle period, we use the workaround above to create separate NeuronCache root directories for each worker instance. For example, see the `BERT ParallelCluster script <https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/training/dp_bert_hf_pretrain/run_dp_bert_large_hf_pretrain_bf16_s128.sh#L42>`__.
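For example, the per-instance cache directory can be exported in each worker's launch script (the path under $HOME is illustrative):
.. code:: bash
# Give every worker instance its own NeuronCache root on the shared home directory
export NEURON_CC_FLAGS="--cache_dir=$HOME/neuron_cache/bert/$(hostname)"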
Compilation error: “Expect ap datatype to be of type float32 float16 bfloat16 uint8”
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If an XLA example fails to run because of failed compilation and one of
the error messages is “Expect ap datatype to be of type float32 float16
bfloat16 uint8”, then please set the environment variable
``XLA_USE_32BIT_LONG=1`` in your script:
.. code:: python
os.environ['XLA_USE_32BIT_LONG'] = '1'
.. code:: bash
11/18/2021 04:51:25 PM WARNING 34567 [StaticProfiler]: matmul-based transposes inserted by penguin takes up 93.66 percent of all matmul computation
terminate called after throwing an instance of 'std::runtime_error'
what(): === BIR verification failed ===
Reason: Expect ap datatype to be of type float32 float16 bfloat16 uint8
Instruction: I-545-0
Opcode: Matmult
Input index: 0
Argument AP:
Access Pattern: [[1,8],[1,1],[1,1]]
Offset: 0
Memory Location: {compare.85-t604_i0}@SB<0,0>(8x2)#Internal DebugInfo: <compare.85||uint16||UNDEF||[8, 1, 1]>
NeuronCore(s) not available - Requested:1 Available:0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you see "NeuronCore(s) not available" please terminate processes
that may be holding the NeuronCores and terminate any neuron-top
sessions that are running. Also check if someone else is using the
system. Then do "sudo rmmod neuron; sudo modprobe neuron" to reload the
driver.
.. code:: bash
2021-Nov-15 15:21:28.0231 7245:7245 ERROR NRT:nrt_allocate_neuron_cores NeuronCore(s) not available - Requested:nc1-nc1 Available:0
2021-11-15 15:21:28.231864: F ./tensorflow/compiler/xla/service/neuron/neuron_runtime.h:1037] Check failed: status == NRT_SUCCESS NEURONPOC : nrt_init failed. Status = 1
Often when you run multi-worker training, there can be many python
processes leftover after a run is interrupted. To kill all python
processes, run the follow (WARNING: this kills all python processes on
the system) then reload the driver:
.. code:: bash
killall -9 python
killall -9 python3
sudo rmmod neuron; sudo modprobe neuron
TDRV error "TDRV:exec_consume_infer_status_notification"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see the TDRV error "TDRV:exec_consume_infer_status_notification", try reloading the driver using ``sudo modprobe -r neuron; sudo modprobe neuron;``.
.. code:: bash
2022-Mar-10 18:51:19.07392022-Mar-10 18:51:19.0739 17821:17931 ERROR TDRV:exec_consume_infer_status_notifications 17822:18046 ERROR TDRV:exec_consume_infer_status_notifications Unexpected number of CC notifications: mod->cc_op_count=1, cc_start_cnt=0, cc_end_cnt=0Unexpected number of CC notifications: mod->cc_op_count=1, cc_start_cnt=0, cc_end_cnt=0
2022-Mar-10 18:51:19.07392022-Mar-10 18:51:19.0739 17821:17931 ERROR TDRV:exec_consume_infer_status_notifications 17822:18046 ERROR TDRV:exec_consume_infer_status_notifications (NON-FATAL, Ignoring) inference timeout (180000 ms) on Neuron Device 0 NC 0, waiting for cc status notifications.
(NON-FATAL, Ignoring) inference timeout (180000 ms) on Neuron Device 0 NC 1, waiting for cc status notifications.
TDRV error "TDRV:tdrv_one_tmpbuf_reserve Number of ONE TMPBUF pages requested exceeded the max number of pages allowed (requested: <N>, max allowed: 16)."
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see the TDRV error "TDRV:tdrv_one_tmpbuf_reserve Number of ONE TMPBUF pages requested exceeded the max number of pages allowed (requested: <N>, max allowed: 16)", it may be due to model tensors requiring more device memory than is available. A solution is to try training with a smaller data batch size, as sketched after the error listing below.
.. code:: bash
ERROR TDRV:tdrv_one_tmpbuf_reserve Number of ONE TMPBUF pages requested exceeded the max number of pages allowed (requested: 28, max allowed: 16).
ERROR TDRV:copy_and_stage_mr Failed to reserve one tmpbuf memory
ERROR TDRV:kbl_model_add copy_and_stage_mr() error
W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1669183391.155135683","description":"Error received from peer ipv4:172.31.58.24:43941","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC
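As a minimal sketch of the batch-size workaround (the dataset and sizes here are purely illustrative stand-ins for your own training data):
.. code:: python
import torch
from torch.utils.data import DataLoader, TensorDataset
# Illustrative stand-in for your training dataset
train_dataset = TensorDataset(torch.randn(1024, 128), torch.zeros(1024, dtype=torch.long))
# Reducing the per-worker batch size (e.g. from 16 to 8) lowers device memory pressure
train_loader = DataLoader(train_dataset, batch_size=8)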
Could not open the ndX, close device failed, TDRV not initialized
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see error messages stating “Could not open the ndX” (where X is
an integer from 0..15), please run ``neuron-ls`` and ensure that you are
able to see all 16 Neuron devices in the output. If one or more devices
are missing, please report the issue to [email protected] with the instance ID and a screen capture of the ``neuron-ls`` output.
::
2021-Nov-11 15:33:20.0161 7912:7912 ERROR TDRV:tdrv_init_mla_phase1 Could not open the nd0
2021-Nov-11 15:33:20.0161 7912:7912 ERROR TDRV:tdrv_destroy_one_mla close device failed
2021-Nov-11 15:33:20.0161 7912:7912 ERROR TDRV:tdrv_destroy TDRV not initialized
2021-Nov-11 15:33:20.0161 7912:7912 ERROR NRT:nrt_init Failed to initialize devices, error:1
2021-11-11 15:33:20.161331: F ./tensorflow/compiler/xla/service/neuron/neuron_runtime.h:1033] Check failed: status == NRT_SUCCESS NEURONPOC : nrt_init failed. Status = 1
Multiworker execution hangs during NCCL init
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When your multi-worker execution hangs during NCCL init, you can try to
reserve the port used by the ``NEURON_RT_ROOT_COMM_ID`` environment variable
(here we use host:port localhost:48620 as an example, but you can use
any free port and the root node’s host IP):
.. code:: bash
sudo sysctl -w net.ipv4.ip_local_reserved_ports=48620
Then set the environment variable ``NEURON_RT_ROOT_COMM_ID`` in your
script:
.. code:: python
os.environ["NEURON_RT_ROOT_COMM_ID"] = "localhost:48620"
.. _nrt-init-error-one-or-more-engines-are-running-please-restart-device-by-reloading-driver:
NRT init error “One or more engines are running. Please restart device by reloading driver”
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see an error stating “One or more engines are running. Please
restart device by reloading driver”, please follow the instruction and
reload the driver using
“\ ``sudo modprobe -r neuron; sudo modprobe neuron;``\ ”.
.. code:: bash
2021-Nov-15 20:23:27.0280 3793:3793 ERROR TDRV:tpb_eng_init_hals_v2 CRITICAL HW ERROR: One or more engines are running. Please restart device by reloading driver:
sudo modprobe -r neuron; sudo modprobe neuron;
2021-Nov-15 20:23:27.0280 3793:3793 ERROR TDRV:tdrv_init_one_mla_phase2 nd0 nc0 HAL init failed. error:1
NRT error “ERROR TDRV:kbl_model_add Attempting to load an incompatible model!”
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see an NRT error “ERROR TDRV:kbl_model_add Attempting to load an
incompatible model!”, this means that the neuronx-cc compiler used to
compile the model is too old. See the installation instructions to update to
the latest compiler.
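A hedged sketch of the update command (the extra index URL is the standard Neuron pip repository; verify it against the installation instructions for your setup):
.. code:: bash
python -m pip install --upgrade neuronx-cc --extra-index-url=https://pip.repos.neuron.amazonaws.com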
NRT error "ERROR HAL:aws_hal_sprot_config_remap_entry SPROT remap destination address must be aligned size"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see an NRT error "ERROR HAL:aws_hal_sprot_config_remap_entry SPROT remap
destination address must be aligned size", please check the kernel version and upgrade it
to the distribution's latest kernel.
For example, on Ubuntu 18.04.6 LTS, kernel version 4.15.0-66-generic is
known to cause this error when running the MLP tutorial. This is due to a known
bug in the kernel's aligned memory allocation. To fix this issue, please
upgrade your kernel to the latest version (e.g. 4.15.0-171-generic):
.. code:: shell
uname -a
sudo apt-get update
sudo apt-get upgrade
sudo apt-get dist-upgrade
Please reboot after the upgrade. Use "uname -a" to check the kernel version again after the reboot.
NCCL warning : "NCCL WARN Timeout waiting for RX (waited 120 sec) - retrying"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When running multi-worker training, if a graph has a collective communication operator like
``all_reduce``, all the workers involved in the collective communication must load the
graph in the runtime at approximately the same time. If any worker doesn't load the graph
within a 120 sec window from the first model load by any worker, you will see warnings
like ``NCCL WARN Timeout waiting for RX (waited 120 sec) - retrying``. When you see such warnings,
check for the following in the log messages:
1. One of the workers is compiling a graph: In multi-worker training, there is a chance that
each worker builds a slightly different graph. This results in a cache miss and can trigger
compilation. Since compilations during a training run are serialized, the first worker
can compile and load the graph with collective communication. It then waits 120 secs
for the other workers to join. If they don't show up because they are compiling their own graphs, the
first worker starts emitting the warning message above. The warning in this case is
``non-fatal`` and goes away once all workers have compiled their respective graphs and loaded
them. To identify this scenario, look for ``No candidate found under ....`` logs around the warning.
You should also see ``.....``, which indicates compilation is in progress.
2. The server on one of the nodes crashed: In distributed training across multiple nodes, if the server on one
node crashes, the workers on the other nodes keep waiting on model load and you will see the
``timeout`` logs above on those nodes. To identify whether the server crashed, check if you see the following
error on any of the nodes:
::
`RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1664146011.016500243","description":"Error received from peer ipv4:10.1.24.109:37379","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC`
If you see the above error, then it means there was a server crash and you need to cancel the
training run.
RPC error: "RPC failed with status = 'UNAVAILABLE: Socket closed'"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you see the above error, it means that the XRT server crashed. In that case, look for
the following:
1. Check for any error logs before the ``RPC error``. They should indicate the root cause of the server crash.
Note: The actual error log might be buried under all the ``RPC error`` messages that swamp the logs.
2. Sometimes the server can crash because of host OOM. This can happen when loading and saving checkpoints.
In such cases, you only see ``RPC errors`` and no other log. You can check whether any instance is running out of memory
using tools like `dmesg <https://man7.org/linux/man-pages/man1/dmesg.1.html>`_, as shown below.
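For example, to check for host OOM kills on each instance:
.. code:: bash
# "Out of memory: Killed process <PID>" entries indicate a host OOM kill
dmesg | grep -i "out of memory"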
Error "Assertion \`listp->slotinfo[cnt].gen <= GL(dl_tls_generation)' failed" followed by 'RPC failed with status = "UNAVAILABLE: Connection reset by peer"'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The error "Assertion \`listp->slotinfo[cnt].gen <= GL(dl_tls_generation)' failed" is intermittent and occurs when using glibc 2.26. To find out the glibc version you have, you can run ``ldd --version``. The workaround is to use Ubuntu 20 where glibc is 2.27.
.. code:: bash
INFO: Inconsistency detected by ld.so: ../elf/dl-tls.c: 488: _dl_allocate_tls_init: Assertion `listp->slotinfo[cnt].gen <= GL(dl_tls_generation)' failed!
INFO: 2022-10-03 02:16:04.488054: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1664763364.487962663","description":"Error received from peer ipv4:10.0.9.150:41677","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC
RPC connection error: "RPC failed with status = UNAVAILABLE: Connection reset by peer" not preceded by any error
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This error may not be preceded by another error like the one shown in the previous section.
In this case, the RPC connection error usually happens during distributed training across multiple nodes. When you see such an error, please
wait a few minutes. It might be that some node is taking time to set up, so the other nodes are not
able to connect to it just yet. Once all nodes are up, training should resume.
Runtime errors "Missing infer_status notification" followed by "inference timeout"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you get a timeout error like below:
.. code:: bash
ERROR TDRV:exec_consume_tpb_status_notifications Missing infer_status notification: (end:4)
ERROR TDRV:exec_consume_infer_status_notifications (FATAL-RT-UNDEFINED-STATE) inference timeout (600000 ms) on Neuron Device 4 NC 1, waiting for execution completion notification
It may be due to long graph execution time causing synchronization delays that
exceed the default timeout. Please try increasing the timeout to a
larger value using ``NEURON_RT_EXEC_TIMEOUT`` (in seconds) and
see if the problem is resolved.
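For example (the 20-minute value is only an illustration):
.. code:: python
import os
# Allow up to 1200 seconds (20 minutes) per execution before timing out
os.environ["NEURON_RT_EXEC_TIMEOUT"] = "1200"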
Protobuf Error "TypeError: Descriptors cannot not be created directly."
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you install torch-neuronx after neuronx-cc, you may get the Protobuf error "TypeError: Descriptors cannot not be created directly.". To fix this, please reinstall neuronx-cc using "pip install --force-reinstall neuronx-cc".
.. code:: bash
Traceback (most recent call last):
File "./run_glue.py", line 570, in <module>
main()
File "./run_glue.py", line 478, in main
data_collator=data_collator,
File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/transformers/trainer.py", line 399, in __init__
callbacks, self.model, self.tokenizer, self.optimizer, self.lr_scheduler
File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/transformers/trainer_callback.py", line 292, in __init__
self.add_callback(cb)
File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/transformers/trainer_callback.py", line 309, in add_callback
cb = callback() if isinstance(callback, type) else callback
File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/transformers/integrations.py", line 390, in __init__
from torch.utils.tensorboard import SummaryWriter # noqa: F401
File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/torch/utils/tensorboard/__init__.py", line 10, in <module>
from .writer import FileWriter, SummaryWriter # noqa: F401
File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/torch/utils/tensorboard/writer.py", line 9, in <module>
from tensorboard.compat.proto.event_pb2 import SessionLog
File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/tensorboard/compat/proto/event_pb2.py", line 17, in <module>
from tensorboard.compat.proto import summary_pb2 as tensorboard_dot_compat_dot_proto_dot_summary__pb2
File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/tensorboard/compat/proto/summary_pb2.py", line 17, in <module>
from tensorboard.compat.proto import tensor_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__pb2
File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/tensorboard/compat/proto/tensor_pb2.py", line 16, in <module>
from tensorboard.compat.proto import resource_handle_pb2 as tensorboard_dot_compat_dot_proto_dot_resource__handle__pb2
File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/tensorboard/compat/proto/resource_handle_pb2.py", line 16, in <module>
from tensorboard.compat.proto import tensor_shape_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__shape__pb2
File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/tensorboard/compat/proto/tensor_shape_pb2.py", line 42, in <module>
serialized_options=None, file=DESCRIPTOR),
File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/google/protobuf/descriptor.py", line 560, in __new__
_message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
1. Downgrade the protobuf package to 3.20.x or lower.
2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
TDRV error "Timestamp program stop timeout"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see TDRV error "Timestamp program stop timeout", i.e. when rerunning a training script after it was interrupted, try first reloading the driver using ``sudo modprobe -r neuron; sudo modprobe neuron;`` (make sure neuron-top and/or neuron-monitor are not running).
.. code:: bash
2022-Aug-31 04:59:21.0546 117717:117717 ERROR TDRV:tsync_wait_eng_stop nd0 nc0 Timestamp program stop timeout (1000 ms)
2022-Aug-31 04:59:21.0546 117717:117717 ERROR TDRV:tsync_wait_nc_stop nd0 nc0 Error while waiting for timestamp program to end on TPB eng 0
2022-Aug-31 04:59:21.0546 117717:117717 ERROR TDRV:tsync_timestamps_finish nd0 nc0 Failed to stop neuron core
2022-Aug-31 04:59:21.0546 117717:117717 ERROR TDRV:tdrv_tsync_timestamps nd0 nc0 Failed to end timestamp sync programs
2022-Aug-31 04:59:22.0768 117717:117717 ERROR TDRV:tdrv_destroy TDRV not initialized
2022-Aug-31 04:59:22.0768 117717:117717 ERROR NRT:nrt_init Failed to initialize devices, error:5
Compiler error "module 'numpy' has no attribute 'asscalar'"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you have a newer version of numpy in the Python environment, compilations may fail with the error "module 'numpy' has no attribute 'asscalar'".
Please note that neuronx-cc has the following dependency on numpy: "numpy<=1.20.0,>=1.13.3". To work around this error, please run "pip install --force-reinstall neuronx-cc" to reinstall neuronx-cc with the proper dependencies.
.. code:: bash
ERROR 227874 [neuronx-cc]: ***************************************************************
ERROR 227874 [neuronx-cc]: An Internal Compiler Error has occurred
ERROR 227874 [neuronx-cc]: ***************************************************************
ERROR 227874 [neuronx-cc]:
ERROR 227874 [neuronx-cc]: Error message: module 'numpy' has no attribute 'asscalar'
ERROR 227874 [neuronx-cc]:
ERROR 227874 [neuronx-cc]: Error class: AttributeError
ERROR 227874 [neuronx-cc]: Error location: Unknown
ERROR 227874 [neuronx-cc]: Version information:
ERROR 227874 [neuronx-cc]: NeuronX Compiler version 2.1.0.76+2909d26a2
ERROR 227874 [neuronx-cc]:
ERROR 227874 [neuronx-cc]: HWM version 2.1.0.7-64eaede08
ERROR 227874 [neuronx-cc]: NEFF version Dynamic
ERROR 227874 [neuronx-cc]: TVM not available
ERROR 227874 [neuronx-cc]: NumPy version 1.23.3
ERROR 227874 [neuronx-cc]: MXNet not available
ERROR 227874 [neuronx-cc]:
Import errors 'generic_type: type "IrValue" is already registered!' or 'generic_type: type "XlaBuilder" is already registered!'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you encounter a PyTorch import error 'import _XLAC ... generic_type: type "IrValue" is already registered!' or 'import _XLAC ... generic_type: type "XlaBuilder" is already registered!', please check that TensorFlow and/or JAX are not installed in the Python environment. If they are installed, please uninstall them.
Import error "import _XLAC ImportError: <>/site-packages/_XLAC.cpython-38-x86_64-linux-gnu.so: undefined symbol"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you encounter a PyTorch import error "import _XLAC ImportError: <>/site-packages/_XLAC.cpython-38-x86_64-linux-gnu.so: undefined symbol" during execution, please check:
1. TensorFlow and/or JAX are not installed in the Python environment. If they are installed, please uninstall them.
2. The installed PyTorch (torch) package major/minor versions match the installed torch-neuronx package's major/minor versions (e.g. 1.11). If they don't match, please install the version of PyTorch that matches torch-neuronx.
.. code:: bash
Traceback (most recent call last):
File "/opt/ml/mlp_train.py", line 11, in <module>
import torch_xla.core.xla_model as xm
File "/usr/local/lib/python3.8/site-packages/torch_xla/__init__.py", line 117, in <module>
import _XLAC
ImportError: /usr/local/lib/python3.8/site-packages/_XLAC.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZNK3c1010TensorImpl7stridesEv
NaNs seen with transformers version >= 4.21.0 when running HF BERT fine-tuning or pretraining with XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When running the HuggingFace BERT (any size) fine-tuning tutorial or pretraining tutorial with transformers version >= 4.21.0 and using XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1, you will see NaNs in the loss immediately at the first step. More details on the issue can be found at `pytorch/xla#4152 <https://github.com/pytorch/xla/issues/4152>`_. The workaround is to use transformers version 4.20.0 or earlier (the tutorials currently recommend version 4.15.0) or to add ``transformers.modeling_utils.get_parameter_dtype = lambda x: torch.bfloat16`` to the Python script, as sketched below.
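A minimal sketch of the monkey-patch workaround, applied before the model is created:
.. code:: python
import torch
import transformers
# Work around pytorch/xla#4152 by reporting bfloat16 as the parameter dtype
transformers.modeling_utils.get_parameter_dtype = lambda x: torch.bfloat16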
.. _trn1_ubuntu_troubleshooting:
Network Connectivity Issue on trn1/trn1n 32xlarge with Ubuntu
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Description**
Ubuntu distributions have network connectivity issues when multiple interfaces are connected to the same subnet. trn1/trn1n 32xlarge comes with 8/16 network interfaces. (To launch trn1/trn1n with 8/16 interfaces, please follow :ref:`here <setup-trn1-multi-node-execution>`.)
AWS publishes a package that installs a helper service to address the issue. This service runs at startup, creates the appropriate netplan files, updates the netplan and the instance networking, and terminates.
Note that the following fix is only required on instances launched using generic Ubuntu AMIs. Neuron AMIs and instances launched via ParallelCluster do not require the fix.
**Patch to fix networking on a multi-interface instance**
.. code:: bash
wget -O /tmp/aws-ubuntu-eni-helper.deb 'https://github.com/aws-samples/aws-efa-nccl-baseami-pipeline/blob/master/nvidia-efa-ami_base/networking/aws-ubuntu-eni-helper_0.3-1_all.deb?raw=true'
sudo apt install /tmp/aws-ubuntu-eni-helper.deb -y
sudo systemctl enable aws-ubuntu-eni-helper.service
sudo systemctl start aws-ubuntu-eni-helper.service
**How to apply the patch?**
The following steps could be followed to resolve this issue:
* Launch trn1.32xl from AWS console (starts with ``single interface``, does not suffer from the multi-interface issue)
* Apply the patch on this newly launched single-interface instance
* Create a new AMI from this instance
* Launch an 8 or 16 interface instance using that AMI.
.. note::
The patch installs and enables the service but does not run it. This is intentional. The service will run at the startup when the AMI is used to launch a multi-interface instance.
**FAQs**
.. note::
The Neuron DLAMI has the patch installed; users are always encouraged to launch instances using the DLAMI, which does not require any fix. Please refer to the :ref:`Set Up Guide <setup-guide-index>` to learn how to launch an instance using the DLAMI.
"Too many open files" when running training job
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When running large model training with several workers, you may see errors like the following.
.. code:: bash
2023-Jun-14 19:05:29.0312 4112959:4113326 [23] bootstrap.cc:106 CCOM WARN Call to accept failed : Too many open files
2023-Jun-14 19:05:29.0312 4112959:4113263 [14] include/socket.h:438 CCOM WARN Net : Socket creation failed : Too many open files
2023-Jun-14 19:05:29.0312 4112959:4113326 ERROR ENC:ncclBootstrapRecv failed neuronBootstrapRecv request to NCCL
2023-Jun-14 19:05:29.0312 4112959:4113249 [12] bootstrap.cc:106 CCOM WARN Call to accept failed : Too many open files
2023-Jun-14 19:05:29.0312 4112959:4113263 ERROR ENC:ncclBootstrapSend failed neuronBootstrapSend request to NCCL2023-Jun-14 19:05:29.03122023-Jun-14 19:05:29.0312 4112959:4113270 [15] bootstrap.cc:106 CCOM WARN Call to accept failed : Too many open files
This can happen when the default OS limits are low. The hard and soft limits can be set on the OS using the following commands or by manually opening the limits files and setting the values.
.. code:: bash
sudo sed -i 'H;1h;$!d;x;/hard *nofile/!s/$/\n* hard nofile 65536/' /etc/security/limits.conf
sudo sed -i 'H;1h;$!d;x;/soft *nofile/!s/$/\n* soft nofile 65536/' /etc/security/limits.conf
sudo sed -i 's/^#*\(\*\|\s*\*\)\s*soft\s*nofile\s*[0-9]\+$/\1 soft nofile 65536/' /etc/security/limits.conf
sudo sed -i 's/^#*\(\*\|\s*\*\)\s*hard\s*nofile\s*[0-9]\+$/\1 hard nofile 65536/' /etc/security/limits.conf
sudo sed -i 's/^#*\(\*\|\s*\*\)\s*soft\s*nofile\s*[0-9]\+$/\1 soft nofile 65536/' /etc/security/limits.d/01_efa.conf || true
sudo sed -i 's/^#*\(\*\|\s*\*\)\s*hard\s*nofile\s*[0-9]\+$/\1 hard nofile 65536/' /etc/security/limits.d/01_efa.conf || true
The `01_efa.conf` file is created as part of the EFA installation and needs to be updated. If the EFA driver is not installed, the file `01_efa.conf` doesn't exist and the sed commands will fail with `No such file or directory`. If there are other files under `limits.d` with file limits, they need to be updated as well.
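After updating the limits (and starting a new login session), you can verify the new values:
.. code:: bash
ulimit -Sn   # soft open-file limit; should now report 65536
ulimit -Hn   # hard open-file limit; should now report 65536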
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _pytorch-neuron-traning-troubleshooting:
PyTorch Neuron (``torch-neuronx``) for Training Troubleshooting Guide
=====================================================================
.. contents:: Table of contents
:local:
:depth: 2
This document shows common issues users may encounter while using
PyTorch-Neuron and provides guidance how to resolve or work-around them.
General Troubleshooting
-----------------------
For setting up EFA that is needed for multi-node training, please see :ref:`setup-trn1-multi-node-execution`
For XLA-related troubleshooting notes see :ref:`How to debug models in PyTorch
Neuron <pytorch-neuronx-debug>`
and `PyTorch-XLA troubleshooting
guide <https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md>`__.
If your multi-worker training run is interrupted, you may need to kill
all the python processes (WARNING: this kills all python processes and
reload the driver):
.. code:: bash
killall -9 python
killall -9 python3
sudo rmmod neuron; sudo modprobe neuron
To turn on RT debug:
.. code:: python
os.environ["NEURON_RT_LOG_LEVEL"] = "INFO"
To turn on Neuron NCCL debug:
.. code:: python
os.environ["NCCL_DEBUG"] = "WARN"
os.environ["NCCL_DEBUG_SUBSYS"] = "ALL"
If some process crashed during training, you can enable core dumps using ``ulimit`` command:
.. code:: bash
ulimit -S -c unlimited
To see the type of signals that would cause core dumps, see https://www.man7.org/linux/man-pages/man7/signal.7.html.
Note that core dumps take significant amount of storage, so make sure there is enough free disk space before enabling core dumps.
On Ubuntu, if Apport is not running, core dump file name is by default "core" in the local directory. To change file location and name format, modify ``/proc/sys/kernel/core_pattern`` (see https://www.kernel.org/doc/html/latest/admin-guide/sysctl/kernel.html#core-pattern for pattern info). For example, to dump to /tmp with executable filename and process ID:
.. code:: bash
echo '/tmp/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern
For containers, install appropriate dependencies during docker build ("apt-get update && apt-get -y install build-essential gdb") and start the container with "--ulimit core=-1" to enable core dump and "-v /tmp/:/tmp/" to ensure core dumps to /tmp are preserved when container is stopped or deleted. Dependencies can also be installed after container is started.
On Ubuntu, core dumps can also handled by Apport which is disabled by default. To enable Apport, run ``sudo service apport start``. The ``/proc/sys/kernel/core_pattern`` is updated by Apport service. After a crash, look in /var/log/apport.log for the core dump file name, which should be in located in /var/lib/apport/coredump/.
Once you have the core dump, you can use gdb to debug further (for Python applications, <executable> is ``python`` or ``python3``):
.. code:: bash
gdb <executable> <core file>
If some process (i.e. XRT server) is killed due to out-of-memory on host (i.e. you see "Out of memory: Killed process <PID>" in syslog or dmesg), there won't be any core dump generated. However, you can change to it to kernel panic mode to trigger core dump by setting ``/proc/sys/vm/panic_on_oom`` to value of 1 on the host or from inside container.
On the host where you need ``sudo`` (this change will be reflected inside the container also):
.. code:: bash
echo 1 | sudo tee /proc/sys/vm/panic_on_oom
From inside container where ``sudo`` doesn't work (this change will be reflected on the host also):
.. code:: bash
echo 1 > /proc/sys/vm/panic_on_oom
Possible Error Conditions
-------------------------
Non-Fatal Error OpKernel ('op: "TPU*" device_type: "CPU"')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
During execution using PyTorch Neuron, you may see these non-fatal error messages:
.. code:: bash
E tensorflow/core/framework/op_kernel.cc:1676] OpKernel ('op: "TPURoundRobin" device_type: "CPU"') for unknown op: TPURoundRobin
E tensorflow/core/framework/op_kernel.cc:1676] OpKernel ('op: "TpuHandleToProtoKey" device_type: "CPU"') for unknown op: TpuHandleToProtoKey
They don't affect operation of the PyTorch Neuron and can be ignored.
XLA runtime error: "Invalid argument: Cannot assign a device for operation"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code:: bash
RuntimeError: tensorflow/compiler/xla/xla_client/xrt_computation_client.cc:490 : Check failed: session->session()->Run(session_work->feed_inputs, session_work->outputs_handles, &outputs) == ::tensorflow::Status::OK() (INVALID_ARGUMENT: Cannot assign a device for operation XRTAllocateFromTensor: {{node XRTAllocateFromTensor}} was explicitly assigned to /job:localservice/replica:0/task:0/device:TPU:0 but available devices are [ /job:localservice/replica:0/task:0/device:CPU:0, /job:localservice/replica:0/task:0/device:TPU_SYSTEM:0, /job:localservice/replica:0/task:0/device:XLA_CPU:0 ]. Make sure the device specification refers to a valid device.
[[XRTAllocateFromTensor]] vs. OK)
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
xla::util::MultiWait::Complete(std::function<void ()> const&)
clone
*** End stack trace ***
The above error indicates that the framework was not able to initialize the neuron runtime. If you get
the above error, check for the following:
1. No other process is taking the neuron cores. If yes, you may have to kill that process.
2. If no process is running, try reloading the driver using ``sudo rmmod neuron; sudo modprobe neuron``
Error: “Could not start gRPC server”
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you get “Could not start gRPC server” error, please check if there
are any leftover python processes from a previous interrupted run and
terminate them before restarting run.
.. code:: bash
E0207 17:22:12.592127280 30834 server_chttp2.cc:40] {"created":"@1644254532.592081429","description":"No address added out of total 1 resolved","file":"external/com_github_grpc_grpc/src/core/ext/t
ransport/chttp2/server/chttp2_server.cc","file_line":395,"referenced_errors":[{"created":"@1644254532.592078907","description":"Failed to add any wildcard listeners","file":"external/com_github_grpc_grpc/s
rc/core/lib/iomgr/tcp_server_posix.cc","file_line":342,"referenced_errors":[{"created":"@1644254532.592072626","description":"Unable to configure socket","fd":10,"file":"external/com_github_grpc_grpc/src/c
ore/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":216,"referenced_errors":[{"created":"@1644254532.592068939","description":"Address already in use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]},{"created":"@1644254532.592078512","description":"Unable to configure socket"
,"fd":10,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":216,"referenced_errors":[{"created":"@1644254532.592077123","description":"Address already in
use","errno":98,"file":"external/com_github_grpc_grpc/src/core/lib/iomgr/tcp_server_utils_posix_common.cc","file_line":189,"os_error":"Address already in use","syscall":"bind"}]}]}]}
2022-02-07 17:22:12.592170: E tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:545] Unknown: Could not start gRPC server
Failed compilation result in the cache
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
All compilation results are by default saved in ``Neuron Persistent Cache``. If the Neuron Compiler
fails to compile a graph, we save the failed result in the cache. The reason for doing so is, if
the user tries to run the same script, we want the users to error out early rather than wait for
the compilation to progress and see an error at the later stage. However, there could be certain
cases under which a failed compilation may be do you some environment issues. One possible reason
of failure could be, during compilation the process went out of memory. This can happen if you are
running multiple processes in parallel such that not enough memory is available for compilation of
graph. Failure due to such reasons can be easily mitigated by re-running the compilation. In case,
you want to retry a failed compilation, you can do that by passing ``--retry_failed_compilation``
as follows:
.. code:: python
os.environ['NEURON_CC_FLAGS'] = os.environ.get('NEURON_CC_FLAGS', '') + ' --retry_failed_compilation'
This would retry the compilation and would replace a failed result in the cache with a
successful compilation result.
Compilation errors when placing NeuronCache home directory on NFS/EFS/FSx mounted drive
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Currently, NeuronCache default root directory is /var/tmp which is local to the instance you are running on. You can modify the location of the NeuronCache root directory using ``NEURON_CC_FLAGS='--cache_dir=<root dir>'``. However, when the NeuronCache directory is placed in a directory that is part of a NFS mounted drive shared among multiple instances, you may encounter file errors such as file not found, file corruption, or KeyError when running multi-instance training:
.. code:: bash
KeyError: 'neff_cache2/neuron-compile-cache/USER_neuroncc-1.0.48875.0+7437fbf18/MODULE_7223055628515330524/MODULE_0_SyncTensorsGraph.14_7223055628515330524_compute1-dy-kaena-training-2-1-e859998e-3035-5df63dab5ce63'
This is a result of limitations to file locking on NFS. EFS/FSx also exhibit similar limitation. The workaround is to setup separate NeuronCache root directories for each worker instance, such as ``NEURON_CC_FLAGS="--cache_dir=$HOME/neuron_cache/bert/`hostname`"``, where the home directory is shared among worker instances as in ParallelCluster.
Consider the use case of a ParallelCluster with SLURM cluster management. The home directory of the head node is shared via NFS with worker instances. Also, SLURM would terminate the idle worker instances when the cluster is configured as dynamic auto-scaling cluster, and the default cache in the terminated worker instance's /var/tmp is deleted. So to persist the cache across runs separated by a cluster idle period, we use the workaround above to create separate NeuronCache root directories for each worker instance. For example, see `BERT ParallelCluster script <https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/training/dp_bert_hf_pretrain/run_dp_bert_large_hf_pretrain_bf16_s128.sh#L42>`__.
Compilation error: “Expect ap datatype to be of type float32 float16 bfloat16 uint8”
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If an XLA example fails to run because of failed compilation and one of
the error messages is “Expect ap datatype to be of type float32 float16
bfloat16 uint8”, then please set the environment variable
``XLA_USE_32BIT_LONG=1`` in your script:
.. code:: python
os.environ['XLA_USE_32BIT_LONG'] = '1'
.. code:: bash
11/18/2021 04:51:25 PM WARNING 34567 [StaticProfiler]: matmul-based transposes inserted by penguin takes up 93.66 percent of all matmul computation
terminate called after throwing an instance of 'std::runtime_error'
what(): === BIR verification failed ===
Reason: Expect ap datatype to be of type float32 float16 bfloat16 uint8
Instruction: I-545-0
Opcode: Matmult
Input index: 0
Argument AP:
Access Pattern: [[1,8],[1,1],[1,1]]
Offset: 0
Memory Location: {compare.85-t604_i0}@SB<0,0>(8x2)#Internal DebugInfo: <compare.85||uint16||UNDEF||[8, 1, 1]>
NeuronCore(s) not available - Requested:1 Available:0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you see "NeuronCore(s) not available" please terminate processes
that may be holding the NeuronCores and terminate any neuron-top
sessions that are running. Also check if someone else is using the
system. Then do "sudo rmmod neuron; sudo modprobe neuron" to reload the
driver.
.. code:: bash
2021-Nov-15 15:21:28.0231 7245:7245 ERROR NRT:nrt_allocate_neuron_cores NeuronCore(s) not available - Requested:nc1-nc1 Available:0
2021-11-15 15:21:28.231864: F ./tensorflow/compiler/xla/service/neuron/neuron_runtime.h:1037] Check failed: status == NRT_SUCCESS NEURONPOC : nrt_init failed. Status = 1
Often when you run multi-worker training, there can be many python
processes leftover after a run is interrupted. To kill all python
processes, run the follow (WARNING: this kills all python processes on
the system) then reload the driver:
.. code:: bash
killall -9 python
killall -9 python3
sudo rmmod neuron; sudo modprobe neuron
TDRV error "TDRV:exec_consume_infer_status_notification"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see TDRV error "TDRV:exec_consume_infer_status_notification", try reloading the driver using ``sudo modprobe -r neuron; sudo modprobe neuron;``.
.. code:: bash
2022-Mar-10 18:51:19.07392022-Mar-10 18:51:19.0739 17821:17931 ERROR TDRV:exec_consume_infer_status_notifications 17822:18046 ERROR TDRV:exec_consume_infer_status_notifications Unexpected number of CC notifications: mod->cc_op_count=1, cc_start_cnt=0, cc_end_cnt=0Unexpected number of CC notifications: mod->cc_op_count=1, cc_start_cnt=0, cc_end_cnt=0
2022-Mar-10 18:51:19.07392022-Mar-10 18:51:19.0739 17821:17931 ERROR TDRV:exec_consume_infer_status_notifications 17822:18046 ERROR TDRV:exec_consume_infer_status_notifications (NON-FATAL, Ignoring) inference timeout (180000 ms) on Neuron Device 0 NC 0, waiting for cc status notifications.
(NON-FATAL, Ignoring) inference timeout (180000 ms) on Neuron Device 0 NC 1, waiting for cc status notifications.
TDRV error "TDRV:tdrv_one_tmpbuf_reserve Number of ONE TMPBUF pages requested exceeded the max number of pages allowed (requested: <N>, max allowed: 16)."
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see the TDRV error "TDRV:tdrv_one_tmpbuf_reserve Number of ONE TMPBUF pages requested exceeded the max number of pages allowed (requested: <N>, max allowed: 16)", it maybe due to model tensors requiring more device memory then available. A solution is to try training with a smaller data batch size.
.. code:: bash
ERROR TDRV:tdrv_one_tmpbuf_reserve Number of ONE TMPBUF pages requested exceeded the max number of pages allowed (requested: 28, max allowed: 16).
ERROR TDRV:copy_and_stage_mr Failed to reserve one tmpbuf memory
ERROR TDRV:kbl_model_add copy_and_stage_mr() error
W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1669183391.155135683","description":"Error received from peer ipv4:172.31.58.24:43941","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC
Could not open the ndX, close device failed, TDRV not initialized
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see error messages stating “Could not open the ndX” (where X is
an integer from 0..15), please run ``neuron-ls`` and ensure that you are
able to see all 16 Neuron devices in the output. If one or more devices
are missing please report the issue to [email protected] with the instance ID and a screen capture of ``neuron-ls`` output.
::
2021-Nov-11 15:33:20.0161 7912:7912 ERROR TDRV:tdrv_init_mla_phase1 Could not open the nd0
2021-Nov-11 15:33:20.0161 7912:7912 ERROR TDRV:tdrv_destroy_one_mla close device failed
2021-Nov-11 15:33:20.0161 7912:7912 ERROR TDRV:tdrv_destroy TDRV not initialized
2021-Nov-11 15:33:20.0161 7912:7912 ERROR NRT:nrt_init Failed to initialize devices, error:1
2021-11-11 15:33:20.161331: F ./tensorflow/compiler/xla/service/neuron/neuron_runtime.h:1033] Check failed: status == NRT_SUCCESS NEURONPOC : nrt_init failed. Status = 1
Multiworker execution hangs during NCCL init
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When your multi-worker execution hangs during NCCL init, you can try to
reserve the port used by environment variable ``NEURON_RT_ROOT_COMM_ID``
by (here we use host:port localhost:48620 as an example but you can use
any free port and root node’s host IP):
.. code:: bash
sudo sysctl -w net.ipv4.ip_local_reserved_ports=48620
Then set the environment variable ``NEURON_RT_ROOT_COMM_ID`` in your
script:
.. code:: python
os.environ["NEURON_RT_ROOT_COMM_ID"] = "localhost:48620"
.. _nrt-init-error-one-or-more-engines-are-running-please-restart-device-by-reloading-driver:
NRT init error “One or more engines are running. Please restart device by reloading driver”
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see an error stating “One or more engines are running. Please
restart device by reloading driver” please follow the instruction and
reload the driver using
“\ ``sudo modprobe -r neuron; sudo modprobe neuron;``\ ”.
.. code:: bash

   2021-Nov-15 20:23:27.0280 3793:3793 ERROR TDRV:tpb_eng_init_hals_v2 CRITICAL HW ERROR: One or more engines are running. Please restart device by reloading driver:
   sudo modprobe -r neuron; sudo modprobe neuron;
   2021-Nov-15 20:23:27.0280 3793:3793 ERROR TDRV:tdrv_init_one_mla_phase2 nd0 nc0 HAL init failed. error:1
NRT error “ERROR TDRV:kbl_model_add Attempting to load an incompatible model!”
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see the NRT error “ERROR TDRV:kbl_model_add Attempting to load an
incompatible model!”, it means that the neuronx-cc compiler used to
compile the model is too old. See the installation instructions to update to
the latest compiler.
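For example, a typical upgrade command looks like the following (a sketch; the extra index URL is the standard Neuron pip repository):

.. code:: bash

   # upgrade the Neuron compiler in the current Python environment
   python -m pip install --upgrade neuronx-cc --extra-index-url=https://pip.repos.neuron.amazonaws.com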
NRT error "ERROR HAL:aws_hal_sprot_config_remap_entry SPROT remap destination address must be aligned size"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see the NRT error "ERROR HAL:aws_hal_sprot_config_remap_entry SPROT remap
destination address must be aligned size", please check the kernel version and upgrade it
to the distribution's latest kernel.
For example, on Ubuntu 18.04.6 LTS, kernel version 4.15.0-66-generic is
known to cause this error when running the MLP tutorial, due to a known
kernel bug in aligned memory allocation. To fix this issue, please
upgrade your kernel to the latest version (e.g. 4.15.0-171-generic):
.. code:: shell

   uname -a
   sudo apt-get update
   sudo apt-get upgrade
   sudo apt-get dist-upgrade
Please reboot after the upgrade. Use ``uname -a`` to check the kernel version again after the reboot.
NCCL warning : "NCCL WARN Timeout waiting for RX (waited 120 sec) - retrying"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When running multi-worker training, if a graph has a collective communication operator such as
``all_reduce``, it requires all the workers involved in the collective communication to load the
graph in the runtime at approximately the same time. If any of the workers doesn't load the graph
within a 120 sec window from the first model load by any of the workers, you will see warnings
like ``NCCL WARN Timeout waiting for RX (waited 120 sec) - retrying``. When you see such warnings,
check for the following in the log messages:

1. One of the workers is compiling a graph: In multi-worker training, there is a chance that
each worker builds a slightly different graph. This results in a cache miss and can trigger a
compilation. Since compilations during a training run are serialized, the first worker
can compile and load the graph with collective communication. It would then wait 120 secs
for the other workers to join. If they don't show up because they are compiling their own graphs,
the first worker starts throwing the warning message above. The warning in this case is
``non-fatal`` and goes away once all workers have compiled their respective graphs and loaded
them. To identify this scenario, look for ``No candidate found under ....`` logs around the warning.
You should also see ``.....`` which indicates compilation is in progress.
2. The server on one of the nodes crashed: In distributed training across multiple nodes, if the server on one
node crashes, the workers on the other nodes keep waiting on model load and you will see the above
``timeout`` logs on those nodes. To identify whether the server crashed, check if you see the following
error on any of the nodes:

::

   `RPC failed with status = "UNAVAILABLE: Socket closed" and grpc_error_string = "{"created":"@1664146011.016500243","description":"Error received from peer ipv4:10.1.24.109:37379","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket closed","grpc_status":14}", maybe retrying the RPC`

If you see the above error, it means the server crashed and you need to cancel the
training run.
RPC error: "RPC failed with status = 'UNAVAILABLE: Socket closed'"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This error means that the XRT server crashed. When you see such an error, look for
the following:

1. Check for any error logs before the ``RPC error``; these should indicate the root cause of the server crash.
Note: the actual error log might be buried under the flood of ``RPC error`` logs.

2. Sometimes the server crashes because the host ran out of memory (OOM). This can happen when loading and saving checkpoints.
In such cases, you only see ``RPC errors`` and no other log. You can check whether any instance is running out of memory
using tools like `dmesg <https://man7.org/linux/man-pages/man1/dmesg.1.html>`_.
Error "Assertion \`listp->slotinfo[cnt].gen <= GL(dl_tls_generation)' failed" followed by 'RPC failed with status = "UNAVAILABLE: Connection reset by peer"'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The error "Assertion \`listp->slotinfo[cnt].gen <= GL(dl_tls_generation)' failed" is intermittent and occurs when using glibc 2.26. To find out the glibc version you have, you can run ``ldd --version``. The workaround is to use Ubuntu 20 where glibc is 2.27.
.. code:: bash

   INFO: Inconsistency detected by ld.so: ../elf/dl-tls.c: 488: _dl_allocate_tls_init: Assertion `listp->slotinfo[cnt].gen <= GL(dl_tls_generation)' failed!
   INFO: 2022-10-03 02:16:04.488054: W tensorflow/core/distributed_runtime/rpc/grpc_remote_master.cc:157] RPC failed with status = "UNAVAILABLE: Connection reset by peer" and grpc_error_string = "{"created":"@1664763364.487962663","description":"Error received from peer ipv4:10.0.9.150:41677","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Connection reset by peer","grpc_status":14}", maybe retrying the RPC
RPC connection error: "RPC failed with status = UNAVAILABLE: Connection reset by peer" not preceded by any error
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This error may not be preceded by another error like the one shown in the previous section.
In that case, the RPC connection error usually happens during distributed training across multiple nodes. When you see such an error, please
wait a few minutes: a node may be taking time to set up, so the other nodes cannot
connect to it just yet. Once all nodes are up, training should resume.
Runtime errors "Missing infer_status notification" followed by "inference timeout"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you get a timeout error like below:
.. code:: bash

   ERROR TDRV:exec_consume_tpb_status_notifications Missing infer_status notification: (end:4)
   ERROR TDRV:exec_consume_infer_status_notifications (FATAL-RT-UNDEFINED-STATE) inference timeout (600000 ms) on Neuron Device 4 NC 1, waiting for execution completion notification
It may be due to a long graph execution time causing synchronization delays
that exceed the default timeout. Please try increasing the timeout to a
larger value using ``NEURON_RT_EXEC_TIMEOUT`` (in seconds) and
see if the problem is resolved.
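For example, a minimal sketch that raises the timeout to 20 minutes before launching a (hypothetical) training script:

.. code:: bash

   # raise the Neuron runtime execution timeout to 1200 seconds
   export NEURON_RT_EXEC_TIMEOUT=1200
   python train.py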
Protobuf Error "TypeError: Descriptors cannot not be created directly."
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you install torch-neuronx after neuronx-cc, you may get the Protobuf error "TypeError: Descriptors cannot not be created directly.". To fix this, please reinstall neuronx-cc using ``pip install --force-reinstall neuronx-cc``.
.. code:: bash

   Traceback (most recent call last):
     File "./run_glue.py", line 570, in <module>
       main()
     File "./run_glue.py", line 478, in main
       data_collator=data_collator,
     File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/transformers/trainer.py", line 399, in __init__
       callbacks, self.model, self.tokenizer, self.optimizer, self.lr_scheduler
     File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/transformers/trainer_callback.py", line 292, in __init__
       self.add_callback(cb)
     File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/transformers/trainer_callback.py", line 309, in add_callback
       cb = callback() if isinstance(callback, type) else callback
     File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/transformers/integrations.py", line 390, in __init__
       from torch.utils.tensorboard import SummaryWriter # noqa: F401
     File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/torch/utils/tensorboard/__init__.py", line 10, in <module>
       from .writer import FileWriter, SummaryWriter # noqa: F401
     File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/torch/utils/tensorboard/writer.py", line 9, in <module>
       from tensorboard.compat.proto.event_pb2 import SessionLog
     File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/tensorboard/compat/proto/event_pb2.py", line 17, in <module>
       from tensorboard.compat.proto import summary_pb2 as tensorboard_dot_compat_dot_proto_dot_summary__pb2
     File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/tensorboard/compat/proto/summary_pb2.py", line 17, in <module>
       from tensorboard.compat.proto import tensor_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__pb2
     File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/tensorboard/compat/proto/tensor_pb2.py", line 16, in <module>
       from tensorboard.compat.proto import resource_handle_pb2 as tensorboard_dot_compat_dot_proto_dot_resource__handle__pb2
     File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/tensorboard/compat/proto/resource_handle_pb2.py", line 16, in <module>
       from tensorboard.compat.proto import tensor_shape_pb2 as tensorboard_dot_compat_dot_proto_dot_tensor__shape__pb2
     File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/tensorboard/compat/proto/tensor_shape_pb2.py", line 42, in <module>
       serialized_options=None, file=DESCRIPTOR),
     File "/home/ec2-user/aws_neuron_venv_pytorch_p37_exp/lib64/python3.7/site-packages/google/protobuf/descriptor.py", line 560, in __new__
       _message.Message._CheckCalledFromGeneratedFile()
   TypeError: Descriptors cannot not be created directly.
   If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
   If you cannot immediately regenerate your protos, some other possible workarounds are:
     1. Downgrade the protobuf package to 3.20.x or lower.
     2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
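Either of the following commands should resolve the error; the first is the fix described above, and the protobuf pin simply follows the error message's own suggestion:

.. code:: bash

   # reinstall the compiler so pip resolves a compatible protobuf
   pip install --force-reinstall neuronx-cc
   # alternative: downgrade protobuf to 3.20.x or lower, as the error suggests
   pip install 'protobuf<3.21'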
TDRV error "Timestamp program stop timeout"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you see the TDRV error "Timestamp program stop timeout" (e.g. when rerunning a training script after it was interrupted), first try reloading the driver using ``sudo modprobe -r neuron; sudo modprobe neuron;`` (make sure neuron-top and/or neuron-monitor are not running).
.. code:: bash

   2022-Aug-31 04:59:21.0546 117717:117717 ERROR TDRV:tsync_wait_eng_stop nd0 nc0 Timestamp program stop timeout (1000 ms)
   2022-Aug-31 04:59:21.0546 117717:117717 ERROR TDRV:tsync_wait_nc_stop nd0 nc0 Error while waiting for timestamp program to end on TPB eng 0
   2022-Aug-31 04:59:21.0546 117717:117717 ERROR TDRV:tsync_timestamps_finish nd0 nc0 Failed to stop neuron core
   2022-Aug-31 04:59:21.0546 117717:117717 ERROR TDRV:tdrv_tsync_timestamps nd0 nc0 Failed to end timestamp sync programs
   2022-Aug-31 04:59:22.0768 117717:117717 ERROR TDRV:tdrv_destroy TDRV not initialized
   2022-Aug-31 04:59:22.0768 117717:117717 ERROR NRT:nrt_init Failed to initialize devices, error:5
Compiler error "module 'numpy' has no attribute 'asscalar'"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you have a newer version of numpy in the Python environment, compilations may fail with the error "module 'numpy' has no attribute 'asscalar'".
Please note that neuronx-cc has the following dependency on numpy: "numpy<=1.20.0,>=1.13.3". To work around this error, run ``pip install --force-reinstall neuronx-cc`` to reinstall neuronx-cc with the proper dependencies.
.. code:: bash

   ERROR 227874 [neuronx-cc]: ***************************************************************
   ERROR 227874 [neuronx-cc]: An Internal Compiler Error has occurred
   ERROR 227874 [neuronx-cc]: ***************************************************************
   ERROR 227874 [neuronx-cc]:
   ERROR 227874 [neuronx-cc]: Error message: module 'numpy' has no attribute 'asscalar'
   ERROR 227874 [neuronx-cc]:
   ERROR 227874 [neuronx-cc]: Error class: AttributeError
   ERROR 227874 [neuronx-cc]: Error location: Unknown
   ERROR 227874 [neuronx-cc]: Version information:
   ERROR 227874 [neuronx-cc]:   NeuronX Compiler version 2.1.0.76+2909d26a2
   ERROR 227874 [neuronx-cc]:
   ERROR 227874 [neuronx-cc]:   HWM version 2.1.0.7-64eaede08
   ERROR 227874 [neuronx-cc]:   NEFF version Dynamic
   ERROR 227874 [neuronx-cc]:   TVM not available
   ERROR 227874 [neuronx-cc]:   NumPy version 1.23.3
   ERROR 227874 [neuronx-cc]:   MXNet not available
   ERROR 227874 [neuronx-cc]:
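As a sketch of the workaround, reinstall the compiler and then confirm that pip pulled a numpy version within the supported range:

.. code:: bash

   pip install --force-reinstall neuronx-cc
   # should print a version <= 1.20.0
   python -c "import numpy; print(numpy.__version__)"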
Import errors 'generic_type: type "IrValue" is already registered!' or 'generic_type: type "XlaBuilder" is already registered!'
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you encounter a PyTorch import error 'import _XLAC ... generic_type: type "IrValue" is already registered!' or 'import _XLAC ... generic_type: type "XlaBuilder" is already registered!', please check that TensorFlow and/or JAX are not installed in the Python environment. If they are installed, please uninstall them.
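A minimal sketch of the cleanup (uninstall whichever of these packages are present):

.. code:: bash

   pip uninstall -y tensorflow jax jaxlib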
Import error "import _XLAC ImportError: <>/site-packages/_XLAC.cpython-38-x86_64-linux-gnu.so: undefined symbol"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you encounter a PyTorch import error "import _XLAC ImportError: <>/site-packages/_XLAC.cpython-38-x86_64-linux-gnu.so: undefined symbol" during execution, please check:
1. TensorFlow and/or JAX are not installed in the Python environment. If they are installed, please uninstall them.
2. The installed PyTorch (torch) package major/minor versions match the installed torch-neuronx package's major/minor versions (ie. 1.11). If they don't match, please install the version of PyTorch that matches torch-neuronx.
.. code:: bash

   Traceback (most recent call last):
     File "/opt/ml/mlp_train.py", line 11, in <module>
       import torch_xla.core.xla_model as xm
     File "/usr/local/lib/python3.8/site-packages/torch_xla/__init__.py", line 117, in <module>
       import _XLAC
   ImportError: /usr/local/lib/python3.8/site-packages/_XLAC.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZNK3c1010TensorImpl7stridesEv
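A quick way to compare the installed versions (the torch and torch-neuronx major/minor versions should match):

.. code:: bash

   pip list | grep torch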
NaNs seen with transformers version >= 4.21.0 when running HF BERT fine-tuning or pretraining with XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When running HuggingFace BERT (any size) fine-tuning tutorial or pretraining tutorial with transformers version >= 4.21.0 and using XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1, you will see NaNs in the loss immediately at the first step. More details on the issue can be found at `pytorch/xla#4152 <https://github.com/pytorch/xla/issues/4152>`_. The workaround is to use 4.20.0 or earlier (the tutorials currently recommend version 4.15.0) or add ``transformers.modeling_utils.get_parameter_dtype = lambda x: torch.bfloat16`` to the Python script.
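In context, the monkey-patch workaround from the paragraph above looks like this (a sketch; apply it before the model is created):

.. code:: python

   import torch
   import transformers.modeling_utils

   # force the HuggingFace dtype probe to report bfloat16
   transformers.modeling_utils.get_parameter_dtype = lambda x: torch.bfloat16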
.. _trn1_ubuntu_troubleshooting:
Network Connectivity Issue on trn1/trn1n 32xlarge with Ubuntu
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
**Description**
Ubuntu distributions have network connectivity issues when multiple interfaces are connected to the same subnet. trn1/trn1n 32xlarge comes with 8/16 network interfaces. (To launch trn1/trn1n with 8/16 interfaces, please follow :ref:`here <setup-trn1-multi-node-execution>`.)
AWS publishes a package that installs a helper service to address the issue. This service runs at startup, creates the appropriate netplan files, updates the netplan and the instance networking, and terminates.
Note that the following fix is only required on instances launched using generic Ubuntu AMIs. Neuron AMIs and instances launched via ParallelCluster do not require the fix.
**Patch to fix networking on a multi-interface instance**
.. code:: bash

   wget -O /tmp/aws-ubuntu-eni-helper.deb 'https://github.com/aws-samples/aws-efa-nccl-baseami-pipeline/blob/master/nvidia-efa-ami_base/networking/aws-ubuntu-eni-helper_0.3-1_all.deb?raw=true'
   sudo apt install /tmp/aws-ubuntu-eni-helper.deb -y
   sudo systemctl enable aws-ubuntu-eni-helper.service
   sudo systemctl start aws-ubuntu-eni-helper.service
**How to apply the patch?**
The following steps can be followed to resolve this issue:

* Launch a trn1.32xl from the AWS console (it starts with a ``single interface`` and does not suffer from the multi-interface issue)
* Apply the patch on this newly launched single-interface instance
* Create a new AMI from this instance
* Launch an 8- or 16-interface instance using that AMI.
.. note::

   The patch installs and enables the service but does not run it. This is intentional. The service will run at startup when the AMI is used to launch a multi-interface instance.
**FAQs**
.. note::

   The Neuron DLAMI has the patch installed; users are always encouraged to launch instances using the DLAMI, which does not require any fix. Please refer to the :ref:`Set Up Guide <setup-guide-index>` for how to launch an instance using the DLAMI.
"Too many open files" when running training job
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When running a large model training job with several workers, you may see errors like the following.
.. code:: bash

   2023-Jun-14 19:05:29.0312 4112959:4113326 [23] bootstrap.cc:106 CCOM WARN Call to accept failed : Too many open files
   2023-Jun-14 19:05:29.0312 4112959:4113263 [14] include/socket.h:438 CCOM WARN Net : Socket creation failed : Too many open files
   2023-Jun-14 19:05:29.0312 4112959:4113326 ERROR ENC:ncclBootstrapRecv failed neuronBootstrapRecv request to NCCL
   2023-Jun-14 19:05:29.0312 4112959:4113249 [12] bootstrap.cc:106 CCOM WARN Call to accept failed : Too many open files
   2023-Jun-14 19:05:29.0312 4112959:4113263 ERROR ENC:ncclBootstrapSend failed neuronBootstrapSend request to NCCL2023-Jun-14 19:05:29.03122023-Jun-14 19:05:29.0312 4112959:4113270 [15] bootstrap.cc:106 CCOM WARN Call to accept failed : Too many open files
This can happen when the default OS limits are low. The hard and soft limits can be set on the OS using the following commands, or by opening the limits files and setting them manually.
.. code:: bash

   sudo sed -i 'H;1h;$!d;x;/hard *nofile/!s/$/\n* hard nofile 65536/' /etc/security/limits.conf
   sudo sed -i 'H;1h;$!d;x;/soft *nofile/!s/$/\n* soft nofile 65536/' /etc/security/limits.conf
   sudo sed -i 's/^#*\(\*\|\s*\*\)\s*soft\s*nofile\s*[0-9]\+$/\1 soft nofile 65536/' /etc/security/limits.conf
   sudo sed -i 's/^#*\(\*\|\s*\*\)\s*hard\s*nofile\s*[0-9]\+$/\1 hard nofile 65536/' /etc/security/limits.conf
   sudo sed -i 's/^#*\(\*\|\s*\*\)\s*soft\s*nofile\s*[0-9]\+$/\1 soft nofile 65536/' /etc/security/limits.d/01_efa.conf || true
   sudo sed -i 's/^#*\(\*\|\s*\*\)\s*hard\s*nofile\s*[0-9]\+$/\1 hard nofile 65536/' /etc/security/limits.d/01_efa.conf || true
The `01_efa.conf` file is created as part of the EFA installation and needs to be updated. If the EFA driver is not installed, the file `01_efa.conf` doesn't exist and the sed commands will fail with `No such file or directory`. If there are other files under `limits.d` that set file limits, they need to be updated as well.
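After logging in again, you can verify that the new limits took effect using standard shell built-ins:

.. code:: bash

   # soft and hard open-file limits for the current session
   ulimit -Sn
   ulimit -Hn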
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuronx-inference.rst.txt
```
.. _inference-tensorflow-neuronx:

Inference on Inf2 & Trn1/Trn1n (``tensorflow-neuronx``)
=========================================================

.. toctree::
    :maxdepth: 1
    :hidden:

    Tutorials </frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx>
    API Reference Guide </frameworks/tensorflow/tensorflow-neuronx/api-reference-guide>
    Misc </frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx>

.. include:: tensorflow-neuronx-inference.txt
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.ipynb.txt
```
{
"cells": [
{
"cell_type": "markdown",
"id": "e91cf83b",
"metadata": {},
"source": [
"# Running Huggingface Roberta-Base with TensorFlow-NeuronX"
]
},
{
"cell_type": "markdown",
"id": "71394e1e",
"metadata": {},
"source": [
"This tutorial demonstrates how to compile the Huggingface roberta-base model and infer on a trn1.2xlarge instance with \n",
"```tensorflow-neuronx```. To compile larger models like roberta-large, please consider using an inf2 instance."
]
},
{
"cell_type": "markdown",
"id": "828ef9bd",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "markdown",
"id": "5becc549",
"metadata": {},
"source": [
"To run this tutorial please follow the instructions for [TensorFlow-NeuronX Setup](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/tensorflow/tensorflow-neuronx/setup/tensorflow-neuronx-install.html) and the [Jupyter Notebook Quickstart](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.html) and set your kernel to \"Python (tensorflow-neuronx)\".\n",
"\n",
"Next, install some additional dependencies."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ee1a3b84",
"metadata": {},
"outputs": [],
"source": [
"%env TOKENIZERS_PARALLELISM=True #Supresses tokenizer warnings making errors easier to detect\n",
"!pip install transformers"
]
},
{
"cell_type": "markdown",
"id": "c301cfce",
"metadata": {},
"source": [
"## Download From Huggingface and Compile for AWS-Neuron"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "92e8050d",
"metadata": {},
"outputs": [],
"source": [
"import tensorflow as tf\n",
"import tensorflow_neuronx as tfnx\n",
"from transformers import RobertaTokenizer, TFRobertaModel\n",
"from transformers import BertTokenizer, TFBertModel\n",
"\n",
"# Create a wrapper for the roberta model that will accept inputs as a list\n",
"# instead of a dictionary. This will allow the compiled model to be saved\n",
"# to disk with the model.save() fucntion.\n",
"class RobertaWrapper(tf.keras.Model):\n",
" def __init__(self, model):\n",
" super().__init__()\n",
" self.model = model\n",
" def __call__(self, example_inputs):\n",
" return self.model({'input_ids' : example_inputs[0], 'attention_mask' : example_inputs[1]})\n",
" \n",
"\n",
"tokenizer = RobertaTokenizer.from_pretrained('roberta-base')\n",
"model = RobertaWrapper(TFRobertaModel.from_pretrained('roberta-base'))\n",
"\n",
"batch_size = 16\n",
"\n",
"# create example inputs with a batch size of 16\n",
"text = [\"Paris is the <mask> of France.\"] * batch_size\n",
"encoded_input = tokenizer(text, return_tensors='tf', padding='max_length', max_length=64)\n",
"\n",
"# turn inputs into a list\n",
"example_input = [encoded_input['input_ids'], encoded_input['attention_mask']]\n",
"\n",
"#compile\n",
"model_neuron = tfnx.trace(model, example_input)\n",
"\n",
"print(\"Running on neuron:\", model_neuron(example_input))\n",
"\n",
"# save the model to disk to save recompilation time for next usage\n",
"model_neuron.save('./roberta-neuron-b16')"
]
},
{
"cell_type": "markdown",
"id": "0f2e159a",
"metadata": {},
"source": [
"## Run Basic Inference Benchmarking"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ccf22e74",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import concurrent.futures\n",
"import time\n",
"\n",
"reloaded_neuron_model = tf.keras.models.load_model('./roberta-neuron-b16')\n",
"print(\"Reloaded model running on neuron:\", reloaded_neuron_model(example_input))\n",
"\n",
"num_threads = 4\n",
"num_inferences = 1000\n",
"\n",
"latency_list = []\n",
"def inference_with_latency_calculation(example_input):\n",
" global latency_list\n",
" start = time.time()\n",
" result = reloaded_neuron_model(example_input)\n",
" end = time.time()\n",
" latency_list.append((end-start) * 1000)\n",
" return result\n",
"\n",
"start = time.time()\n",
"with concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as executor:\n",
" futures = []\n",
" for i in range(num_inferences):\n",
" futures.append(executor.submit(inference_with_latency_calculation, example_input))\n",
" for future in concurrent.futures.as_completed(futures):\n",
" get_result = future.result()\n",
"end = time.time()\n",
"\n",
"total_time = end - start\n",
"\n",
"print(f\"Throughput was {(num_inferences * batch_size)/total_time} samples per second.\")\n",
"print(f\"Latency p50 was {np.percentile(latency_list, 50)} ms\")\n",
"print(f\"Latency p90 was {np.percentile(latency_list, 90)} ms\")\n",
"print(f\"Latency p99 was {np.percentile(latency_list, 99)} ms\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python (Neuron TensorFlow)",
"language": "python",
"name": "aws_neuron_venv_tf"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.rst.txt
```
API Reference Guide (``tensorflow-neuronx``)
==============================================

.. toctree::
    :maxdepth: 1
    :hidden:

    /frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api
    /frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api
    /frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api

.. include:: /frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.txt
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.rst.txt
```
.. _inference-tensorflow-neuronx-tutorials:

Tutorials (``tensorflow-neuronx``)
===================================

.. toctree::
    :maxdepth: 1
    :hidden:

    HuggingFace Roberta-Base </src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.ipynb>
    /frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores

.. include:: /frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.txt
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.rst.txt
```
.. _tfneuronx-ref-neuron-tracing-api:
TensorFlow 2.x (``tensorflow-neuronx``) Tracing API
=====================================================
The Neuron tracing API enables tracing TensorFlow 2.x models for deployment
on trn1 and inf2 AWS machine learning accelerators.
Method
------
``tensorflow_neuronx.trace``
Description
-----------
Trace a ``keras.Model`` or a Python callable that can be decorated by
``tf.function``, and return an AWS-Neuron-optimized ``keras.Model`` that
can execute on trn1 and inf2 AWS machine learning accelerators. Tracing is
ideal for a ``keras.Model`` that accepts a list of ``tf.Tensor`` objects and
returns a list of ``tf.Tensor`` objects. It is expected that users will
provide example inputs, and the ``trace`` function will execute ``func``
symbolically and convert it to a ``keras.Model``.
The returned ``keras.Model`` will support inference only. Attributes or
variables held by the original function or ``keras.Model`` will be dropped.
The returned ``keras.Model`` can be exported as SavedModel and served using
TensorFlow Serving. Please see :ref:`tensorflow-serving` for more
information about exporting to saved model and serving using TensorFlow
Serving.
The returned ``keras.Model`` has an ``.on_neuron_ratio`` attribute,
which shows the percentage of ops mapped to Neuron hardware. This calculation
ignores PlaceholderOp, IdentityOp, ReadVariableOp and NoOp.
Options can be passed to Neuron compiler via the environment variable
``NEURON_CC_FLAGS``. For example, the syntax
``env NEURON_CC_FLAGS="--workdir ./artifacts"`` directs the Neuron compiler to dump artifacts
in the artifacts directory for debugging. See :ref:`neuron-compiler-cli-reference-guide` for more
information about compiler options.
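For instance, to pass a compiler flag when running a tracing script (``compile_model.py`` is a hypothetical script name):

.. code:: bash

   NEURON_CC_FLAGS="--workdir ./artifacts" python compile_model.py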
Arguments
---------
- **func:** The ``keras.Model`` or function to be traced.
- **example_inputs:** A ``tf.Tensor`` or a tuple/list/dict of
``tf.Tensor`` objects for tracing the function. When ``example_inputs``
is a ``tf.Tensor`` or a list of ``tf.Tensor`` objects, we expect
``func`` to have calling signature ``func(example_inputs)``. Otherwise,
the expectation is that inference on ``func`` is done by calling
``func(*example_inputs)`` when ``example_inputs`` is a ``tuple``,
or ``func(**example_inputs)`` when ``example_inputs`` is a ``dict``.
The case where ``func`` accepts mixed positional and keyword arguments
is currently unsupported.
- **subgraph_builder_function:** (Optional) A callable with signature
``subgraph_builder_function(node : NodeDef) -> bool``
(``NodeDef`` is defined in tensorflow/core/framework/node_def.proto)
that is used as a call-back function to determine which part of
the tensorflow GraphDef given by tracing ``func`` will be placed on
Machine Learning Accelerators.
If ``subgraph_builder_function`` is not provided, then ``trace`` will
automatically place operations on Machine Learning Accelerators or
on CPU to maximize the execution efficiency.
If it is provided, and ``subgraph_builder_function(node)`` returns
``True``, and placing ``node`` on Machine Learning Accelerators
will not cause deadlocks during execution, then ``trace`` will place
``node`` on Machine Learning Accelerators. If
``subgraph_builder_function(node)`` returns ``False``, then ``trace``
will place ``node`` on CPU.
.. _tensorflow-neuronx-special-flags:
Special Flags
-------------
These are flags that get passed directly to the Neuron tracing API
(rather than the Neuron Compiler). The flags are still passed
via the environment variable ``NEURON_CC_FLAGS``.
- **workdir:** example usage - ``NEURON_CC_FLAGS='--workdir ./artifacts'``
will create a folder named artifacts in the current directory and
save artifacts that can be used for debug.
- **dynamic-batch-size:** example usage -
``NEURON_CC_FLAGS='--dynamic-batch-size'``. A flag that allows Neuron graphs to
consume variable-sized batches of data. Dynamic sizing is restricted to the
0th dimension of a tensor (see the sketch after this list).
- **extract-weights (EXPERIMENTAL):** example usage -
``NEURON_CC_FLAGS='--extract-weights trn1.2xlarge'`` will reduce the compiled
model's protobuf size by taking the weights out of the protobuf.
Useful for compiling large models that would exceed the 2GB protobuf
size limit. This feature is experimental. Model performance is not
guaranteed and the flag does not work in combination with
``--neuroncore-pipeline-cores``, ``--dynamic-batch-size``, models with
multiple NEFFs, and models that are 16GB or greater.
Compiles models for different neuron instances depending on the instance type passed.
Supports all trn1 and inf2 instance types except for trn1n.
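A minimal sketch of dynamic batching, assuming the flag is set before tracing:

.. code:: python

   import os
   os.environ['NEURON_CC_FLAGS'] = '--dynamic-batch-size'

   import tensorflow as tf
   import tensorflow_neuronx as tfnx

   input0 = tf.keras.layers.Input(3)
   dense0 = tf.keras.layers.Dense(3)(input0)
   model = tf.keras.Model(inputs=[input0], outputs=[dense0])

   # trace with a batch size of 1 ...
   model_neuron = tfnx.trace(model, tf.random.uniform([1, 3]))
   # ... then call with a different size on dimension 0
   model_neuron(tf.random.uniform([8, 3]))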
Returns
-------
- An AWS-Neuron-optimized ``keras.Model``.
Example Usage
-------------
.. code:: python

   import tensorflow as tf
   import tensorflow_neuronx as tfnx

   input0 = tf.keras.layers.Input(3)
   dense0 = tf.keras.layers.Dense(3)(input0)
   model = tf.keras.Model(inputs=[input0], outputs=[dense0])
   example_inputs = tf.random.uniform([1, 3])
   model_neuron = tfnx.trace(model, example_inputs)  # trace

   # check to see how much of the model was compiled successfully
   print(model_neuron.on_neuron_ratio)

   model_dir = './model_neuron'
   model_neuron.save(model_dir)
   model_neuron_reloaded = tf.keras.models.load_model(model_dir)
Example Usage with Manual Device Placement Using ``subgraph_builder_function``
--------------------------------------------------------------------------------
.. code:: python

   import tensorflow as tf
   import tensorflow_neuronx as tfnx

   input0 = tf.keras.layers.Input(3)
   dense0 = tf.keras.layers.Dense(3)(input0)
   reshape0 = tf.keras.layers.Reshape([1, 3])(dense0)
   output0 = tf.keras.layers.Dense(2)(reshape0)
   model = tf.keras.Model(inputs=[input0], outputs=[output0])
   example_inputs = tf.random.uniform([1, 3])

   def subgraph_builder_function(node):
       return node.op == 'MatMul'

   model_neuron = tfnx.trace(
       model, example_inputs,
       subgraph_builder_function=subgraph_builder_function,
   )
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.rst.txt
```
.. _tf-neuronx-ref-auto-replication-python-api:
TensorFlow Neuron (``tensorflow-neuronx``) Auto Multicore Replication (Experimental)
======================================================================================
The Neuron auto multicore replication Python API enables modifying TensorFlow 2.x
models traced by ``tensorflow_neuronx.trace`` so that they can be automatically replicated across multiple cores.
.. contents:: Table of contents
:local:
:depth: 1
TensorFlow Neuron TF 2.x (``tensorflow-neuron TF2.x``) Auto Multicore Replication Python API (Experimental)
-----------------------------------------------------------------------------------------------------------
Method
^^^^^^
``tensorflow.neuron.auto_multicore``
on models traced by
``tensorflow_neuronx.trace``
Description
^^^^^^^^^^^
Converts an existing AWS-Neuron-optimized ``keras.Model`` and returns an auto-replication tagged
AWS-Multicore-Neuron-optimized ``keras.Model`` that can execute on AWS Machine Learning Accelerators.
Like the traced model, the returned ``keras.Model`` will support inference only. Attributes or
variables held by the original function or ``keras.Model`` will be dropped.
The auto model replication feature in TensorFlow-Neuron enables you to
create a model once; the model-parallel replication then happens
automatically. The desired number of cores can be less than the total available NeuronCores
on a trn1 or inf2 instance, but not less than 1. This reduces framework memory usage, since you are not
loading the same model multiple times manually. Calls to the returned model execute
on each core in a round-robin fashion.
The returned ``keras.Model`` can be exported as SavedModel and served using
TensorFlow Serving. Please see :ref:`tensorflow-serving` for more
information about exporting to saved model and serving using TensorFlow
Serving.
Note that automatic replication will only work on models compiled with pipeline size 1,
i.e. via ``--neuroncore-pipeline-cores=1``. If auto replication is not enabled, the model will default to
replication on up to 4 cores.
See :ref:`neuron-compiler-cli-reference-guide` for more information about compiler options.
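As a sketch, the pipeline size can be pinned through ``NEURON_CC_FLAGS`` before tracing, so the traced model qualifies for auto replication:

.. code:: python

   import os
   # compile with pipeline size 1 (required for auto multicore replication)
   os.environ['NEURON_CC_FLAGS'] = '--neuroncore-pipeline-cores=1'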
Arguments
^^^^^^^^^
- **func:** The ``keras.Model`` or function to be traced.
- **example_inputs:** A ``tf.Tensor`` or a tuple/list/dict of
``tf.Tensor`` objects for tracing the function. When ``example_inputs``
is a ``tf.Tensor`` or a list of ``tf.Tensor`` objects, we expect
``func`` to have calling signature ``func(example_inputs)``. Otherwise,
the expectation is that inference on ``func`` is done by calling
``func(*example_inputs)`` when ``example_inputs`` is a ``tuple``,
or ``func(**example_inputs)`` when ``example_inputs`` is a ``dict``.
The case where ``func`` accepts mixed positional and keyword arguments
is currently unsupported.
- **num_cores:** The desired number of cores where the model will be automatically
replicated across
Returns
^^^^^^^
- An AWS-Multicore-Neuron-optimized ``keras.Model``.
Example Python API Usage for TF2.x traced models:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code:: python

   import tensorflow as tf
   import tensorflow.neuron as tfn
   import tensorflow_neuronx as tfnx

   input0 = tf.keras.layers.Input(3)
   dense0 = tf.keras.layers.Dense(3)(input0)
   inputs = [input0]
   outputs = [dense0]
   model = tf.keras.Model(inputs=inputs, outputs=outputs)
   input0_tensor = tf.random.uniform([1, 3])
   model_neuron = tfnx.trace(model, input0_tensor)

   # a trn1.2xlarge has 2 neuron cores
   num_cores = 2
   multicore_model = tfn.auto_multicore(model_neuron, input0_tensor, num_cores=num_cores)
   multicore_model(input0_tensor)
Example Python API Usage for TF2.x saved models:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code:: python

   from tensorflow.python import saved_model

   input0_tensor = tf.random.uniform([1, 3])
   num_cores = 4
   model_dir = './model_neuron'  # directory of a previously saved Neuron model
   reload_model = saved_model.load(model_dir)
   multicore_model = tfn.auto_multicore(reload_model, input0_tensor, num_cores=num_cores)
.. _tensorflow-ref-auto-replication-cli-api:
TensorFlow Neuron TF2.x (``tensorflow-neuronx TF2.x``) Auto Multicore Replication CLI (Experimental)
---------------------------------------------------------------------------------------------------------------
The Neuron auto multicore replication CLI enables modifying TensorFlow 2.x
traced saved models so that they can be automatically replicated across multiple cores. Performing
this call on TensorFlow SavedModels lets us support TensorFlow Serving
without significant code modifications.
Method
^^^^^^
``tf-neuron-auto-multicore MODEL_DIR --num_cores NUM_CORES --new_model_dir NEW_MODEL_DIR``
Arguments
^^^^^^^^^
- **MODEL_DIR:** The directory of a saved AWS-Neuron-optimized ``keras.Model``.
- **NUM_CORES:** The desired number of cores where the model will be automatically
replicated across
- **NEW_MODEL_DIR:** The directory of where the AWS-Multicore-Neuron-optimized
``keras.Model`` will be saved
Example CLI Usage for Tensorflow-Serving saved models:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code:: bash

   tf-neuron-auto-multicore ./resnet --num_cores 8 --new_model_dir ./modified_resnet
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.rst.txt
```
Misc (``tensorflow-neuronx``)
==============================

.. toctree::
    :maxdepth: 1
    :hidden:

    /release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx

.. include:: /frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.txt
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.rst.txt
```
.. _tf-neuronx-ref-analyze-model-api:
TensorFlow 2.x (``tensorflow-neuronx``) analyze_model API
==========================================================
Method
------
``tensorflow_neuronx.analyze_model``
Description
-----------
Analyzes a ``keras.Model``, or a Python callable that can be decorated by
``tf.function``, for its compatibility with Neuron. It displays the supported
and unsupported operators in the model, together with the percentage and count
of each, and returns a dictionary of operator statistics.
Arguments
---------
- **func:** The ``keras.Model`` or function to be analyzed.
- **example_inputs:** A ``tf.Tensor`` or a tuple/list/dict of
``tf.Tensor`` objects for tracing the function. When ``example_inputs``
is a ``tf.Tensor`` or a list of ``tf.Tensor`` objects, we expect
``func`` to have calling signature ``func(example_inputs)``. Otherwise,
the expectation is that inference on ``func`` is done by calling
``func(*example_inputs)`` when ``example_inputs`` is a ``tuple``,
or ``func(**example_inputs)`` when ``example_inputs`` is a ``dict``.
The case where ``func`` accepts mixed positional and keyword arguments
is currently unsupported.
Returns
-------
- A results ``dict`` with these keys: ``'percent_supported', 'supported_count',
'total_count', 'supported_operators', 'unsupported_operators', 'operators',
'operator_count'``.
Example Usage
-------------
.. code:: python

   import tensorflow as tf
   import tensorflow_neuronx as tfnx

   # Build a small Keras model and check its Neuron operator coverage.
   input0 = tf.keras.layers.Input(3)
   dense0 = tf.keras.layers.Dense(3)(input0)
   model = tf.keras.Model(inputs=[input0], outputs=[dense0])
   example_inputs = tf.random.uniform([1, 3])

   results = tfnx.analyze_model(model, example_inputs)
   print(results)

   # expected output
   '''
   BiasAdd
   MatMul
   100.00% of all operations (2 of 2) are supported
   {'percent_supported': 100.0, 'supported_count': 2, 'total_count': 2,
   'supported_operators': {'BiasAdd', 'MatMul'}, 'unsupported_operators': [],
   'operators': ['BiasAdd', 'MatMul'], 'operator_count': {'MatMul': 1, 'BiasAdd': 1}}
   '''
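The returned dictionary makes it straightforward to gate compilation on
operator coverage. A small sketch continuing from the ``results``, ``model``,
and ``example_inputs`` above (``tfnx.trace`` is the tracing API documented
separately; unsupported operators are partitioned to CPU when tracing):

.. code:: python

   # Only trace the model when every operator is supported; otherwise
   # report which operators would be partitioned to CPU.
   if results['percent_supported'] == 100.0:
       model_neuron = tfnx.trace(model, example_inputs)
   else:
       print('Unsupported operators:', results['unsupported_operators'])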
```
|
|
2023-09-29T20:54:49.739Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.rst.txt
|
```
Tutorials (``tensorflow-neuron``)
===================================
.. toctree::
:maxdepth: 1
:hidden:
Computer Vision Tutorials </frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision>
Natural Language Processing (NLP) Tutorials </frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp>
Utilizing Neuron Capabilities Tutorials </frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities>
.. include:: /frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.txt
```
|
|
2023-09-29T20:54:49.816Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.rst.txt
|
```
.. _tensorflow-neuronx-release-notes:
TensorFlow Neuron (``tensorflow-neuronx``) Release Notes
========================================================
.. contents:: Table of contents
:local:
:depth: 1
This document lists the release notes for the tensorflow-neuronx 2.x packages.
tensorflow-neuronx 2.x release [2.1.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 09/15/2023
* Minor updates
Date: 05/01/2023
* Added support for tracing models larger than 2 GB through the environment variable ``NEURON_CC_FLAGS='--extract-weights INSTANCE_TYPE'`` for all trn1 and inf2 instance types (see the sketch below).
* tensorflow-neuronx now supports tensorflow 2.7, 2.8, and 2.9 (in addition to the already supported 2.10).
* Neuron release 2.10 will be the last release to include support for tensorflow-neuronx version 2.7; future Neuron releases will not include it.
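The large-model flag is passed to the Neuron compiler through the environment.
A minimal sketch of enabling it before tracing (the instance-type value shown
is an assumption; substitute the instance type you are compiling for):

.. code:: python

   import os

   # Assumption: INSTANCE_TYPE is the target instance type string,
   # e.g. 'trn1.2xlarge'. The variable must be set before tracing.
   os.environ['NEURON_CC_FLAGS'] = '--extract-weights trn1.2xlarge'

   import tensorflow as tf
   import tensorflow_neuronx as tfnx

   input0 = tf.keras.layers.Input(3)
   dense0 = tf.keras.layers.Dense(3)(input0)
   model = tf.keras.Model(inputs=[input0], outputs=[dense0])
   # The flag is picked up by the Neuron compiler during tracing.
   model_neuron = tfnx.trace(model, tf.random.uniform([1, 3]))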
tensorflow-neuronx 2.10 release [2.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 03/28/2023
The second release of tensorflow-neuronx 2.10 includes the following features:
* Dynamic batching
The following features are not included in this release:
* Support for tracing models larger than 2 GB
tensorflow-neuronx 2.10 release [1.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 2/24/2023
The initial release of tensorflow-neuronx 2.10 includes the following features:
* Initial support for TensorFlow 2.10 inference on Inf2 and Trn1
* Trace API (tensorflow_neuronx.trace)
* Automatic partitioning of model into CPU vs NeuronCore parts
* Automatic data parallel on multiple NeuronCores (experimental)
* Python 3.7, 3.8 and 3.9 support
* HuggingFace Roberta tutorial
The following features are not included in this release:
* Dynamic batching
* Support for tracing models larger than 2 GB
```
|
|
2023-09-29T20:54:49.823Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.rst.txt
|
```
Computer Vision Tutorials (``tensorflow-neuron``)
=================================================
* Tensorflow 1.x - OpenPose tutorial :ref:`[html] </src/examples/tensorflow/openpose_demo/openpose.ipynb>` :github:`[notebook] </src/examples/tensorflow/openpose_demo/openpose.ipynb>`
* Tensorflow 1.x - ResNet-50 tutorial :ref:`[html] </src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb>` :github:`[notebook] </src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb>`
* Tensorflow 1.x - YOLOv4 tutorial :ref:`[html] <tensorflow-yolo4>` :github:`[notebook] </src/examples/tensorflow/yolo_v4_demo/evaluate.ipynb>`
* Tensorflow 1.x - YOLOv3 tutorial :ref:`[html] </src/examples/tensorflow/yolo_v3_demo/yolo_v3.ipynb>` :github:`[notebook] </src/examples/tensorflow/yolo_v3_demo/yolo_v3.ipynb>`
* Tensorflow 1.x - SSD300 tutorial :ref:`[html] <tensorflow-ssd300>`
* Tensorflow 1.x - Keras ResNet-50 optimization tutorial :ref:`[html] </src/examples/tensorflow/keras_resnet50/keras_resnet50.ipynb>` :github:`[notebook] </src/examples/tensorflow/keras_resnet50/keras_resnet50.ipynb>`
.. toctree::
:hidden:
/src/examples/tensorflow/openpose_demo/openpose.ipynb
/src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb
/frameworks/tensorflow/tensorflow-neuron/tutorials/yolo_v4_demo/yolo_v4_demo
/src/examples/tensorflow/yolo_v3_demo/yolo_v3.ipynb
/frameworks/tensorflow/tensorflow-neuron/tutorials/ssd300_demo/ssd300_demo
/src/examples/tensorflow/keras_resnet50/keras_resnet50.ipynb
```
|
|
2023-09-29T20:54:49.926Z
|
|
Introducing New Neuron GitHub Repositories — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/announcements/neuron2.x/github-changes.html
|
# Introducing New Neuron GitHub Repositories — AWS Neuron Documentation
## Introducing New Neuron GitHub Repositories
Starting with Neuron release 2.3, Neuron GitHub repositories will be migrated to the new [AWS Neuron GitHub Organization](https://github.com/aws-neuron).
The new AWS Neuron GitHub Organization includes the existing [Neuron SDK GitHub](https://github.com/aws-neuron/aws-neuron-sdk) repository along with the following new GitHub repositories:
AWS Neuron GitHub Organization

| New GitHub repository | Description |
| --- | --- |
| [AWS Neuron Samples](https://github.com/aws-neuron/aws-neuron-samples) | Repository that hosts examples and scripts used in the Neuron documentation tutorials |
| [AWS Neuron Reference for Megatron-LM](https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm) | Repository that hosts Neuron support for Megatron-LM |
| [AWS Neuron Samples for AWS ParallelCluster](https://github.com/aws-neuron/aws-neuron-parallelcluster-samples) | Repository that hosts Neuron support for AWS ParallelCluster |
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
|
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fgeneral/announcements/neuron2.x/github-changes.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/general/announcements/neuron2.x/github-changes.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../_sources/general/announcements/neuron2.x/github-changes.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.rst</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>Introducing New Neuron GitHub Repositories</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
</div>
</div>
</div>
# Introducing New Neuron GitHub Repositories

_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`

Starting with Neuron release 2.3, Neuron GitHub repositories will be migrated to the new [AWS Neuron GitHub Organization](https://github.com/aws-neuron).

The new AWS Neuron GitHub Organization will include the [Neuron SDK GitHub](https://github.com/aws-neuron/aws-neuron-sdk) repository, along with the following additional new GitHub repositories:

**AWS Neuron GitHub Organization**

| New GitHub repository | Description |
| --- | --- |
| [AWS Neuron Samples](https://github.com/aws-neuron/aws-neuron-samples) | Repository that hosts examples and scripts used in the Neuron documentation tutorials |
| [AWS Neuron Reference for Megatron-LM](https://github.com/aws-neuron/aws-neuron-reference-for-megatron-lm) | Repository that hosts Neuron support for Megatron-LM |
| [AWS Neuron Samples for AWS ParallelCluster](https://github.com/aws-neuron/aws-neuron-parallelcluster-samples) | Repository that hosts Neuron support for AWS ParallelCluster |

_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
|
2023-09-29T20:54:50.247Z
|
Support — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/support.html
|
# Support — AWS Neuron Documentation
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
## Support[#](#support "Permalink to this headline")
- [SDK Maintenance Policy](sdk-policy.html)
- [Security Disclosures](security.html)
- [Contact Us](contact.html)
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
|
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../frameworks/torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../frameworks/torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
# Tensorflow ResNet 50 Optimization Tutorial
## Note: this tutorial runs on tensorflow-neuron 1.x only
## Install Dependencies
```
!pip install pillow requests # Necessary for loading images
!pip install 'tensorflow-neuron<2' --extra-index-url=https://pip.repos.neuron.amazonaws.com
```
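With the dependencies installed, you can optionally confirm that a 1.x build of tensorflow-neuron is active before proceeding. This is a minimal sketch (tensorflow-neuron 1.x is built on TensorFlow 1.15):
```
# Quick environment check (optional): tensorflow-neuron 1.x ships TensorFlow 1.15.
import tensorflow as tf
print(tf.__version__)
assert tf.__version__.startswith('1.'), 'this tutorial requires tensorflow-neuron 1.x'
```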
## Compile
The following example shows how to compile an FP16 ResNet50 network using various batching parameters to find the optimal configuration. On an inf1.6xlarge instance, run through the following steps to get an optimized ResNet50 model. First, extract the Keras ResNet50 FP32 graph (`resnet50_fp32_keras.pb` will be generated):
```
import re
import argparse
import tensorflow as tf
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from google.protobuf import text_format
import tensorflow.python.saved_model

# set Keras global configurations
tf.keras.backend.set_learning_phase(0)
tf.keras.backend.set_image_data_format('channels_last')
float_type = 'float32'
float_type2 = 'fp32'
tf.keras.backend.set_floatx(float_type)

# load pre-trained model using Keras
model_name = 'resnet50_%s_keras' % float_type2
model = ResNet50(weights='imagenet')

# various save files
frozen_file = model_name + '.pb'
opt_file = model_name + '_opt.pb'

# obtain parameters
model_input = model.input.name.replace(':0', '')
model_output = model.output.name.replace(':0', '')
batch, height, width, channels = model.input.shape
print("model, frozen file, optimized file, input size, input node, output node,")
print("%s, %s, %s, %dx%dx%d, %s, %s" % (model_name, frozen_file, opt_file, width, height, channels, model_input, model_output))

# obtain the TF session
sess = tf.compat.v1.keras.backend.get_session()

# save checkpoint files for freeze_graph
ckpt_file = '/tmp/' + model_name + '/' + model_name + '.ckpt'
graph_file = '/tmp/' + model_name + '/' + model_name + '.pb'
tf.compat.v1.train.Saver().save(sess, ckpt_file)
tf.io.write_graph(sess.graph.as_graph_def(), logdir='.', name=graph_file, as_text=False)
print(model_output)

with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    saver = tf.compat.v1.train.import_meta_graph(ckpt_file + '.meta')
    saver.restore(sess, ckpt_file)
    output_graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(
        sess, tf.compat.v1.get_default_graph().as_graph_def(), [model_output])
    output_graph_def = tf.compat.v1.graph_util.remove_training_nodes(
        output_graph_def, protected_nodes=[model_output])
    with open(frozen_file, 'wb') as f:
        f.write(output_graph_def.SerializeToString())
```
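Before optimizing, you can optionally sanity-check the frozen graph by importing it into a fresh session and running a dummy input through it. This is a minimal sketch; the `input_1`/`probs/Softmax` tensor names come from the Keras ResNet50 graph used throughout this tutorial:
```
# Optional sanity check: the frozen graph should produce a (1, 1000) softmax.
frozen_def = tf.compat.v1.GraphDef()
with open(frozen_file, 'rb') as f:
    frozen_def.ParseFromString(f.read())
with tf.compat.v1.Session(graph=tf.Graph()) as check_sess:
    tf.import_graph_def(frozen_def, name='')
    probs = check_sess.run('probs/Softmax:0',
                           {'input_1:0': np.zeros([1, 224, 224, 3], np.float32)})
    print(probs.shape)  # expected: (1, 1000)
```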
Optimize the extracted Keras ResNet50 FP32 graph for inference before casting (`resnet50_fp32_keras_opt.pb` will be generated), applying the following transformations to the graph:
- Remove Identity and CheckNumerics nodes
- Fold FusedBatchNorm constants into previous Conv2D weights
- Fold other constants
- Strip unused nodes
- Sort by execution order
```
import copy
import string
from google.protobuf import text_format
from tensorflow.core.framework import node_def_pb2
from tensorflow.core.framework import attr_value_pb2
from tensorflow.python.framework import tensor_util
from tensorflow.tools.graph_transforms import TransformGraph

def clear_input(node):
    for i in range(len(node.input)):
        node.input.pop()

def replace_name(node, name):
    node.name = name

def replace_input(node, input_name, new_name):
    # node.input.replace(input_name, new_name)
    temp = []
    for i in node.input:
        temp.extend([new_name if i == input_name else i])
    clear_input(node)
    for i in temp:
        node.input.extend([i])

def swap_names(node1, node2):
    temp = node2.name
    node2.name = node1.name
    node1.name = temp

def get_const_node(const_node_name, const_by_name):
    name = re.sub("/read$", "", const_node_name)
    return const_by_name[name]

def get_const_ndarray(const_node_name, const_by_name):
    name = re.sub("/read$", "", const_node_name)
    node = const_by_name[name]
    return tf.make_ndarray(node.attr.get("value").tensor)

def adjust_bias_values(bias_node, fbn_node, const_by_name):
    bias_val = get_const_ndarray(bias_node.input[1], const_by_name)
    gamma_val = get_const_ndarray(fbn_node.input[1], const_by_name)
    mean_val = get_const_ndarray(fbn_node.input[3], const_by_name)
    variance_val = get_const_ndarray(fbn_node.input[4], const_by_name)
    new_bias = bias_val * gamma_val / np.sqrt(variance_val)
    new_tensor = tensor_util.make_tensor_proto(new_bias, new_bias.dtype, new_bias.shape)
    bias_const_node = get_const_node(bias_node.input[1], const_by_name)
    bias_const_node.attr["value"].CopyFrom(attr_value_pb2.AttrValue(tensor=new_tensor))

def MoveBiasAddAfterFusedBatchNorm(graphdef):
    """fold_batch_norm function of TransformGraph is unable to fold Keras ResNet50
    because of BiasAdd between Conv2D and FusedBatchNorm (BiasAdd is not needed
    if FusedBatchNorm is used, but it exists in Keras ResNet50). Here, we
    move BiasAdd to after FusedBatchNorm, and adjust bias value by gamma/sqrt(variance).
    """
    sess = tf.compat.v1.Session(graph=tf.import_graph_def(graphdef))
    output_graph_def = tf.compat.v1.GraphDef()
    node_by_name = {}
    const_by_name = {}
    for node in graphdef.node:
        # Hack: use FusedBatchNormV2 so fold_batch_norm can recognize
        if node.op == "FusedBatchNormV3":
            node.op = "FusedBatchNorm"
            del(node.attr["U"])
        copied_node = node_def_pb2.NodeDef()
        copied_node.CopyFrom(node)
        node_by_name[node.name] = copied_node
        skip_add_node = False
        # Switch Mul/BiasAdd in Keras RN50 so fold_batch_norm transform would work
        if node.op == "Const":
            const_by_name[node.name] = copied_node
        elif node.op.startswith("FusedBatchNorm"):
            inputs = node.input
            for i in inputs:
                input_node = node_by_name[i]
                if input_node.op == "BiasAdd":
                    output_graph_def.node.remove(input_node)
                    input_node_input0 = input_node.input[0]
                    # Adjust bias values (multiply by scale/sqrt(variance))
                    adjust_bias_values(input_node, node, const_by_name)
                    # Hack: swap names to avoid changing input of activation
                    swap_names(copied_node, input_node)
                    # Fix inputs for these two ops
                    replace_input(copied_node, i, input_node_input0)
                    replace_input(input_node, input_node_input0, copied_node.name)
                    # Fix order in node list
                    output_graph_def.node.extend([copied_node])
                    output_graph_def.node.extend([input_node])
                    skip_add_node = True
        # Add maybe-modified nodes if not already done
        if not skip_add_node:
            output_graph_def.node.extend([copied_node])
    return output_graph_def

def FoldFusedBatchNorm(graph_def):
    """Optimize training graph for inference:
    - Remove Identity and CheckNumerics nodes
    - Fold FusedBatchNorm constants into previous Conv2D weights
    - Fold other constants
    - Strip unused nodes
    - Sort by execution order
    """
    transformed_graph_def = TransformGraph(
        graph_def,
        ['input_1'],
        ['probs/Softmax'],
        [
            'add_default_attributes',
            'remove_nodes(op=Identity, op=CheckNumerics)',
            'fold_constants(ignore_errors=true)',
            'fold_batch_norms',
            'fold_old_batch_norms',
            'strip_unused_nodes',
            'sort_by_execution_order',
        ])
    return transformed_graph_def

def load_graph(model_file):
    graph_def = tf.compat.v1.GraphDef()
    with open(model_file, "rb") as f:
        graph_def.ParseFromString(f.read())
    return graph_def

graph_orig = load_graph('resnet50_fp32_keras.pb')
graph_mod = MoveBiasAddAfterFusedBatchNorm(graph_orig)
graph_mod2 = FoldFusedBatchNorm(graph_mod)
with tf.io.gfile.GFile('resnet50_fp32_keras_opt.pb', "wb") as f:
    f.write(graph_mod2.SerializeToString())
```
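To verify that batch-norm folding actually happened, a quick count of the batch-norm ops before and after (a minimal sketch reusing the `graph_orig` and `graph_mod2` objects from the block above) should show the `FusedBatchNorm` nodes disappearing:
```
# Optional check: count FusedBatchNorm ops before and after folding.
def count_fbn(graph_def):
    return sum(node.op.startswith('FusedBatchNorm') for node in graph_def.node)

print(count_fbn(graph_orig))   # > 0: batch-norm nodes present before folding
print(count_fbn(graph_mod2))   # expected 0 after folding into Conv2D weights
```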
Convert the full graph to FP16 (`resnet50_fp16_keras_opt.pb` will be generated). This will take about a minute.
```
from tensorflow.core.framework import graph_pb2
from tensorflow.python.platform import gfile

def ConvertFP32ToOther(graphdef):
    """Converts an FP32 network by casting all constants (weights) to a lower
    precision floating point type (FP16) and updating the dtypes
    everywhere."""
    cast_type = "float16"
    sess = tf.Session(graph=tf.import_graph_def(graphdef))
    output_graph_def = graph_pb2.GraphDef()
    dummy_tensor = sess.run(tf.constant([0.1]))
    dummy_tensor_proto = tensor_util.make_tensor_proto(dummy_tensor,
                                                       dtype=cast_type, shape=dummy_tensor.shape)
    dummy_tensor32 = sess.run(tf.constant([0.1]))
    dummy_tensor_proto32 = tensor_util.make_tensor_proto(dummy_tensor,
                                                         dtype=tf.float32, shape=dummy_tensor.shape)
    dt_float_type_attr = attr_value_pb2.AttrValue(type=dummy_tensor_proto32.dtype)
    dt_half_type_attr = attr_value_pb2.AttrValue(type=dummy_tensor_proto.dtype)
    for node in graphdef.node:
        output_node = node_def_pb2.NodeDef()
        output_node.CopyFrom(node)
        if node.op == "Const":
            if node.attr["dtype"] == dt_float_type_attr:
                a = tensor_util.MakeNdarray(node.attr["value"].tensor)
                a = tf.cast(a, cast_type)
                a = sess.run(a)
                output_node.attr["dtype"].CopyFrom(dt_half_type_attr)
                output_node.attr["value"].CopyFrom(
                    attr_value_pb2.AttrValue(
                        tensor=tensor_util.make_tensor_proto(a,
                                                             dtype=cast_type, shape=a.shape)))
        else:
            if "T" in node.attr.keys():
                if output_node.attr["T"] == dt_float_type_attr:
                    output_node.attr["T"].CopyFrom(dt_half_type_attr)
            if "Tparams" in node.attr.keys():
                if output_node.attr["Tparams"] == dt_float_type_attr:
                    output_node.attr["Tparams"].CopyFrom(dt_half_type_attr)
            if "dtype" in node.attr.keys():
                if node.attr["dtype"] == dt_float_type_attr:
                    output_node.attr["dtype"].CopyFrom(dt_half_type_attr)
            if "SrcT" in node.attr.keys():
                if node.attr["SrcT"] == dt_float_type_attr:
                    output_node.attr["SrcT"].CopyFrom(dt_half_type_attr)
            if "DstT" in node.attr.keys():
                if node.attr["DstT"] == dt_float_type_attr:
                    output_node.attr["DstT"].CopyFrom(dt_half_type_attr)
        output_graph_def.node.extend([output_node])
    return output_graph_def

def load_graph(model_file):
    graph_def = tf.GraphDef()
    with open(model_file, "rb") as f:
        graph_def.ParseFromString(f.read())
    return graph_def

graph_f32 = load_graph('resnet50_fp32_keras_opt.pb')
graph_f16 = ConvertFP32ToOther(graph_f32)
output_xformed_graph_name = 'resnet50_fp16_keras_opt.pb'
with gfile.GFile(output_xformed_graph_name, "wb") as f:
    f.write(graph_f16.SerializeToString())
```
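You can optionally compare the FP16 graph against the FP32 one on the same input before compiling; the top-1 predictions should normally agree. This is a minimal sketch reusing `graph_f32` and `graph_f16` from the block above (tensor names as before):
```
# Optional numerics check: FP32 and FP16 top-1 predictions should usually match.
def run_graph(graph_def, x):
    with tf.Session(graph=tf.Graph()) as check_sess:
        tf.import_graph_def(graph_def, name='')
        return check_sess.run('probs/Softmax:0', {'input_1:0': x})

x = np.random.rand(1, 224, 224, 3)
p32 = run_graph(graph_f32, x.astype(np.float32))
p16 = run_graph(graph_f16, x.astype(np.float16))
print(np.argmax(p32), np.argmax(p16))
```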
Run the sweep below over batch sizes up to 5 and several NeuronCore Group sizes up to 16. Each configuration invokes the compilation script `pb2sm_compile.py`, which attempts compilation. Some error messages are expected due to known issues (see the Known Issues section below). Running all the configurations takes about 45 minutes.
```
%%bash
#!/usr/bin/env bash
echo "" > full_sweep.log
echo "" > full_sweep_results.txt
results=()
for b in $(seq 1 5); do
    for i in 1 2 4 8 12 16; do
        python pb2sm_compile.py --batch_size=$b --neuroncore-pipeline-cores=$i | tee -a full_sweep.log;
        results[$b]+=", "`tail -1 full_sweep.log`
    done
done
head="batch"
for i in 1 2 4 8 12 16; do
    head+=", nc${i}"
done
echo $head | tee -a full_sweep_results.txt
for b in $(seq 1 5); do
    echo $b${results[$b]} | tee -a full_sweep_results.txt
done
```
You should see some output like this:
```
INFO: Compilation finished in 95 seconds with 99.5% operations placed on Inferentia
1
*** Batch size 1, num NeuronCores 2 (input shape: (1, 224, 224, 3), saved model dir: rn50_fp16_compiled_b1_nc2) ***
INFO: Compilation finished in 95 seconds with 99.5% operations placed on Inferentia
1
*** Batch size 1, num NeuronCores 4 (input shape: (1, 224, 224, 3), saved model dir: rn50_fp16_compiled_b1_nc4) ***
INFO: Compilation finished in 95 seconds with 99.5% operations placed on Inferentia
1
... (outputs removed)
*** Batch size 5, num NeuronCores 16 (input shape: (5, 224, 224, 3), saved model dir: rn50_fp16_compiled_b5_nc16) ***
ERROR: Compilation finished in 120 seconds with less than 50% operations placed on Inferentia (0.0%)
INFO: Retry compilation without static weights
ERROR: Retry compilation finished in 137 seconds with less than 50% operations placed on Inferentia (0.0%)
0
```
The file `full_sweep_results.txt` shows a summary of the sweep results with the latest Neuron 1/27/20 release (0 means compilation was unsuccessful and 0 ops mapped to Inferentia, 1 means most ops mapped to Inferentia with non-static weights, 2 means most ops mapped to Inferentia using static weights):
```
batch, nc1, nc2, nc4, nc8, nc12, nc16
1, 1, 1, 1, 2, 2, 2
2, 1, 1, 0, 1, 2, 2
3, 1, 1, 1, 1, 1, 1
4, 1, 1, 0, 1, 1, 1
5, 1, 1, 0, 0, 0, 0
```
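The `pb2sm_compile.py` script itself is not reproduced in this tutorial. As a rough, hypothetical sketch of what it does, it wraps the frozen FP16 graph in a SavedModel and compiles it with the tensorflow-neuron 1.x API; the exact arguments below (in particular `dynamic_batch_size` and the `--neuroncore-pipeline-cores` compiler flag) are assumptions based on the sweep parameters above, not the script's actual contents:
```
# Hypothetical sketch of the core of pb2sm_compile.py (not the actual script).
import shutil
import tensorflow as tf
import tensorflow.neuron as tfn

def compile_rn50(frozen_pb='resnet50_fp16_keras_opt.pb', batch_size=1, num_cores=1):
    saved_model_dir = 'rn50_fp16_savedmodel'
    compiled_dir = 'rn50_fp16_compiled_b%d_nc%d' % (batch_size, num_cores)
    shutil.rmtree(saved_model_dir, ignore_errors=True)
    graph_def = tf.GraphDef()
    with open(frozen_pb, 'rb') as f:
        graph_def.ParseFromString(f.read())
    # Wrap the frozen graph in a SavedModel so it can be compiled and served.
    with tf.Session(graph=tf.Graph()) as sess:
        tf.import_graph_def(graph_def, name='')
        inp = sess.graph.get_tensor_by_name('input_1:0')
        out = sess.graph.get_tensor_by_name('probs/Softmax:0')
        tf.saved_model.simple_save(sess, saved_model_dir,
                                   {'input_1:0': inp}, {'probs/Softmax:0': out})
    # Compile for Inferentia with a fixed batch size and pipeline-core count.
    tfn.saved_model.compile(
        saved_model_dir, compiled_dir,
        batch_size=batch_size,
        dynamic_batch_size=True,
        compiler_args=['--neuroncore-pipeline-cores', str(num_cores)])
```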
## Inference
Run inference over different batch sizes and NeuronCore groups to obtain throughput and latency results for ResNet50. To apply dynamic batching, the user batch size is set to 10x the compiled batch size in order to keep the input queue full and to amortize framework-to-Neuron overhead.
Note: The results are based on the Neuron v1.12.2 (Mar 4th 2021) release. These will continue to improve as we increase Neuron performance.
```
%cd ~/aws-neuron-sdk/src/examples/tensorflow/keras_resnet50/
!echo "" > batch.log
!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=1 | tee -a batch.log; done
!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=2 | tee -a batch.log; done
!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=4 | tee -a batch.log; done
!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=8 | tee -a batch.log; done
!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=12 | tee -a batch.log; done
!for i in $(seq 1 5); do python infer_resnet50_keras_loadtest.py --batch_size=$i --neuroncore-pipeline-cores=16 | tee -a batch.log; done
```
The file `batch.log` now contains the results for each batch size. We can look at the throughput values to get an idea of which configurations perform well. The best configuration for throughput (if you run on an inf1.6xlarge as suggested in the tutorial) is batch size 5 with a NeuronCore group size of 2; increasing the batch size usually helps to increase throughput, up to a certain extent. The output should look something like this:
```
*** Compiled batch size 5, user batch size 10, num NeuronCores 2 (input shape: (10, 224, 224, 3), saved model dir: ./rn50_fp16_compiled_b5_nc2/1) ***
Instance type inf1.6xlarge with 16 NeuronCores
NEURON_MAX_NUM_INFERS (env): 5
NEURONCORE_GROUP_SIZES (env): 2,2,2,2,2,2,2,2
NUM THREADS: 16
NUM_LOOPS_PER_THREAD: 400
USER_BATCH_SIZE: 10
Throughput values collected:
[10680, 10700, 10660]
(rest of outputs removed)
```
## Known Issues
### Unable to compile with batch and num NeuronCores combination
For some combinations of batch size and number of NeuronCores, you may see an internal compiler error as shown below. Please see the sweep results above for the Neuron 1/27/20 release. Furthermore, auto-casting an FP32 network to bfloat16 with a batch size larger than 1 results in the same error.
```
INFO:tensorflow:fusing subgraph neuron_op_a73aed4b95ca5d5b with neuron-cc; log file is at /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.neuron-cc.log
WARNING:tensorflow:Failed to fuse subgraph neuron_op_a73aed4b95ca5d5b with '/home/ubuntu/test_venv/bin/neuron-cc compile /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.neff --io-config "{\"inputs\": {\"input_10/_0:0\": [[6, 224, 224, 3], \"float16\"]}, \"outputs\": [\"probs/Softmax:0\"]}" --batching_en --rematerialization_en --sb_size 120 --spill_dis --enable-replication True'
WARNING:tensorflow:neuron-cc error message:
WARNING:tensorflow:01/23/2020 01:15:40 AM ERROR [neuron-cc]:
01/23/2020 01:15:40 AM ERROR [neuron-cc]: ***************************************************************
01/23/2020 01:15:40 AM ERROR [neuron-cc]: An Internal Compiler Error has occurred
01/23/2020 01:15:40 AM ERROR [neuron-cc]: ***************************************************************
01/23/2020 01:15:40 AM ERROR [neuron-cc]:
01/23/2020 01:15:40 AM ERROR [neuron-cc]: Please contact Customer Support and provide the following details.
01/23/2020 01:15:40 AM ERROR [neuron-cc]:
01/23/2020 01:15:40 AM ERROR [neuron-cc]: Error message: Non-zero exit status (134) for command: /home/ubuntu/test_venv/lib/python3.6/site-packages/neuroncc/starfish/bin/list_sch --hhir hh-tr-external-move.json --verbose 0 --sb_size 120 --arith_intensity_target 2300 --sb_watermark_low 0.250000 --sb_watermark_high 0.750000 --sb_size_tol 1 --alloc simple1 --alloc_opt --depth_diff 0.100000 --verbose_start_cycle 0 --tt_dist --mm_meet_cnt 1 --load_speed_factor 0.300000 --schir sch_tmp.json --spill_depth_limit 5 --spill_dis --true_dep --mm_order --batching_en --rematerialization_en
01/23/2020 01:15:40 AM ERROR [neuron-cc]:
01/23/2020 01:15:40 AM ERROR [neuron-cc]: Error class: CompilerInternalError
01/23/2020 01:15:40 AM ERROR [neuron-cc]: Error location: job.Scheduler.3
01/23/2020 01:15:40 AM ERROR [neuron-cc]: Command line: /home/ubuntu/test_venv/bin/neuron-cc compile /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.neff --io-config '{"inputs": {"input_10/_0:0": [[6, 224, 224, 3], "float16"]}, "outputs": ["probs/Softmax:0"]}' --batching_en --rematerialization_en --sb_size 120 --spill_dis --enable-replication True
01/23/2020 01:15:40 AM ERROR [neuron-cc]:
01/23/2020 01:15:40 AM ERROR [neuron-cc]: Internal details:
01/23/2020 01:15:40 AM ERROR [neuron-cc]: File "neuroncc/driver/Job.py", line 207, in neuroncc.driver.Job.runSingleInputFn
01/23/2020 01:15:40 AM ERROR [neuron-cc]: File "neuroncc/driver/jobs/Scheduler.py", line 58, in neuroncc.driver.jobs.Scheduler.Scheduler.runSingleInput
01/23/2020 01:15:40 AM ERROR [neuron-cc]: File "neuroncc/driver/Job.py", line 145, in neuroncc.driver.Job.Job.shellCommand
01/23/2020 01:15:40 AM ERROR [neuron-cc]:
01/23/2020 01:15:40 AM ERROR [neuron-cc]: Version information:
01/23/2020 01:15:41 AM ERROR [neuron-cc]: Neuron Compiler version 1.0.6632.0+6001610955
01/23/2020 01:15:41 AM ERROR [neuron-cc]:
01/23/2020 01:15:41 AM ERROR [neuron-cc]: HWM version 1.0.839.0-6001300654
01/23/2020 01:15:41 AM ERROR [neuron-cc]: NEFF version 0.6
01/23/2020 01:15:41 AM ERROR [neuron-cc]: TVM version 1.0.1589.0+6001610955
01/23/2020 01:15:41 AM ERROR [neuron-cc]: NumPy version 1.16.5
01/23/2020 01:15:41 AM ERROR [neuron-cc]: MXNet not available
01/23/2020 01:15:41 AM ERROR [neuron-cc]: TF version 1.15.0
01/23/2020 01:15:41 AM ERROR [neuron-cc]:
```
}
@font-face /* 5 */ {
font-family: MJXTEX-BI;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Math-BoldItalic.woff") format("woff");
}
@font-face /* 6 */ {
font-family: MJXTEX-S1;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size1-Regular.woff") format("woff");
}
@font-face /* 7 */ {
font-family: MJXTEX-S2;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size2-Regular.woff") format("woff");
}
@font-face /* 8 */ {
font-family: MJXTEX-S3;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size3-Regular.woff") format("woff");
}
@font-face /* 9 */ {
font-family: MJXTEX-S4;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size4-Regular.woff") format("woff");
}
@font-face /* 10 */ {
font-family: MJXTEX-A;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_AMS-Regular.woff") format("woff");
}
@font-face /* 11 */ {
font-family: MJXTEX-C;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Calligraphic-Regular.woff") format("woff");
}
@font-face /* 12 */ {
font-family: MJXTEX-CB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Calligraphic-Bold.woff") format("woff");
}
@font-face /* 13 */ {
font-family: MJXTEX-FR;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Fraktur-Regular.woff") format("woff");
}
@font-face /* 14 */ {
font-family: MJXTEX-FRB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Fraktur-Bold.woff") format("woff");
}
@font-face /* 15 */ {
font-family: MJXTEX-SS;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_SansSerif-Regular.woff") format("woff");
}
@font-face /* 16 */ {
font-family: MJXTEX-SSB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_SansSerif-Bold.woff") format("woff");
}
@font-face /* 17 */ {
font-family: MJXTEX-SSI;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_SansSerif-Italic.woff") format("woff");
}
@font-face /* 18 */ {
font-family: MJXTEX-SC;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Script-Regular.woff") format("woff");
}
@font-face /* 19 */ {
font-family: MJXTEX-T;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Typewriter-Regular.woff") format("woff");
}
@font-face /* 20 */ {
font-family: MJXTEX-V;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Vector-Regular.woff") format("woff");
}
@font-face /* 21 */ {
font-family: MJXTEX-VB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Vector-Bold.woff") format("woff");
}
</style></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="current nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../frameworks/torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 current active has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 current active has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l3 current active has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l4 current active">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<div class="section" id="Tensorflow-ResNet-50-Optimization-Tutorial">
<h1>Tensorflow ResNet 50 Optimization Tutorial<a class="headerlink" href="#Tensorflow-ResNet-50-Optimization-Tutorial" title="Permalink to this headline">#</a></h1>
<div class="section" id="Note:-this-tutorial-runs-on-tensorflow-neuron-1.x-only">
<h2>Note: this tutorial runs on tensorflow-neuron 1.x only<a class="headerlink" href="#Note:-this-tutorial-runs-on-tensorflow-neuron-1.x-only" title="Permalink to this headline">#</a></h2>
</div>
<div class="section" id="Introduction:">
<h2>Introduction:<a class="headerlink" href="#Introduction:" title="Permalink to this headline">#</a></h2>
<p>In this tutorial we provide three main sections:</p>
<ul class="simple">
<li><p>Take a Resnet 50 model and perform optimizations on it</p></li>
<li><p>Compile the model with different batch sizes and Neuroncore Group sizes (read about Neuroncore Group sizes here: <a class="reference external" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-runtime/nrt-theory-of-operation.html#neuron-core-group">https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-runtime/nrt-theory-of-operation.html#neuron-core-group</a>)</p></li>
<li><p>Run inference on our multiple compiled models to see which has the best throughput</p></li>
</ul>
<p>Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the <a class="reference external" href="../../../../frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow">Tensorflow Installation Guide</a>. You can select the Kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page.</p>
</div>
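Later in the tutorial, the NeuronCore Group layout used at inference time is controlled through the ``NEURONCORE_GROUP_SIZES`` environment variable. As a minimal, illustrative sketch only (the ``'1,1,1,1'`` layout below is an assumption for demonstration, not one of this tutorial's measured configurations):

.. code:: ipython3

    import os

    # Illustrative only: request four NeuronCore Groups of one core each, so
    # four model replicas can execute in parallel. The variable must be set
    # before the Neuron runtime initializes, i.e. before tensorflow is imported.
    os.environ['NEURONCORE_GROUP_SIZES'] = '1,1,1,1'

    import tensorflow as tf  # imported after the env var on purpose

A group size that matches the number of NeuronCores a model was compiled for lets several copies of the model run side by side on one Inferentia device, which is the knob this tutorial sweeps together with batch size.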
<div class="section" id="Install-Dependencies">
<h2>Install Dependencies<a class="headerlink" href="#Install-Dependencies" title="Permalink to this headline">#</a></h2>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">!</span>pip<span class="w"> </span>install<span class="w"> </span>pillow<span class="w"> </span>requests<span class="w"> </span>#<span class="w"> </span>Necessary<span class="w"> </span><span class="k">for</span><span class="w"> </span>loading<span class="w"> </span>images
<span class="o">!</span>pip<span class="w"> </span>install<span class="w"> </span><span class="s1">'tensorflow-neuron<2'</span><span class="w"> </span>--extra-index-url<span class="o">=</span>https://pip.repos.neuron.amazonaws.com
</pre></div>
</div>
</div>
</div>
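Before moving on, a quick sanity check that a 1.x build is actually active in the kernel can save a confusing failure later; a minimal check, assuming the install above succeeded, might look like this:

.. code:: ipython3

    import tensorflow as tf

    # tensorflow-neuron<2 ships on top of TensorFlow 1.15, so a 1.15.x version
    # string indicates the expected framework build is active in this kernel.
    print(tf.__version__)
    assert tf.__version__.startswith('1.15'), 'expected a tensorflow-neuron 1.x environment'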
<div class="section" id="Compile">
<h2>Compile<a class="headerlink" href="#Compile" title="Permalink to this headline">#</a></h2>
<p>The following example shows how to compile a FP16 ResNet50 network using various batching parameters to find the optimal solution. On inf1.6xlarge, run through the following steps to get a optimized Resnet 50 model. First, extract Keras ResNet50 FP32 (resnet50_fp32_keras.pb will be generated):</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">re</span>
<span class="kn">import</span> <span class="nn">argparse</span>
<span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">from</span> <span class="nn">tensorflow.keras.applications.resnet50</span> <span class="kn">import</span> <span class="n">ResNet50</span>
<span class="kn">from</span> <span class="nn">tensorflow.keras.preprocessing</span> <span class="kn">import</span> <span class="n">image</span>
<span class="kn">from</span> <span class="nn">tensorflow.keras.applications.resnet50</span> <span class="kn">import</span> <span class="n">preprocess_input</span><span class="p">,</span> <span class="n">decode_predictions</span>
<span class="kn">from</span> <span class="nn">google.protobuf</span> <span class="kn">import</span> <span class="n">text_format</span>
<span class="kn">import</span> <span class="nn">tensorflow.python.saved_model</span>
<span class="c1"># set Keras global configurations</span>
<span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">backend</span><span class="o">.</span><span class="n">set_learning_phase</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span>
<span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">backend</span><span class="o">.</span><span class="n">set_image_data_format</span><span class="p">(</span><span class="s1">'channels_last'</span><span class="p">)</span>
<span class="n">float_type</span> <span class="o">=</span> <span class="s1">'float32'</span>
<span class="n">float_type2</span> <span class="o">=</span> <span class="s1">'fp32'</span>
<span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">backend</span><span class="o">.</span><span class="n">set_floatx</span><span class="p">(</span><span class="n">float_type</span><span class="p">)</span>
<span class="c1"># load pre-trained model using Keras</span>
<span class="n">model_name</span> <span class="o">=</span> <span class="s1">'resnet50_</span><span class="si">%s</span><span class="s1">_keras'</span><span class="o">%</span><span class="k">float_type2</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">ResNet50</span><span class="p">(</span><span class="n">weights</span><span class="o">=</span><span class="s1">'imagenet'</span><span class="p">)</span>
<span class="c1"># various save files</span>
<span class="n">frozen_file</span> <span class="o">=</span> <span class="n">model_name</span> <span class="o">+</span> <span class="s1">'.pb'</span>
<span class="n">opt_file</span> <span class="o">=</span> <span class="n">model_name</span> <span class="o">+</span> <span class="s1">'_opt.pb'</span>
<span class="c1"># obtain parameters</span>
<span class="n">model_input</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">input</span><span class="o">.</span><span class="n">name</span><span class="o">.</span><span class="n">replace</span><span class="p">(</span><span class="s1">':0'</span><span class="p">,</span> <span class="s1">''</span><span class="p">)</span>
<span class="n">model_output</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">output</span><span class="o">.</span><span class="n">name</span><span class="o">.</span><span class="n">replace</span><span class="p">(</span><span class="s1">':0'</span><span class="p">,</span> <span class="s1">''</span><span class="p">)</span>
<span class="n">batch</span><span class="p">,</span> <span class="n">height</span><span class="p">,</span> <span class="n">width</span><span class="p">,</span> <span class="n">channels</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">input</span><span class="o">.</span><span class="n">shape</span>
<span class="nb">print</span> <span class="p">(</span><span class="s2">"model, frozen file, optimized file, input size, input node, output node,"</span><span class="p">)</span>
<span class="nb">print</span> <span class="p">(</span><span class="s2">"</span><span class="si">%s</span><span class="s2">, </span><span class="si">%s</span><span class="s2">, </span><span class="si">%s</span><span class="s2">, </span><span class="si">%d</span><span class="s2">x</span><span class="si">%d</span><span class="s2">x</span><span class="si">%d</span><span class="s2">, </span><span class="si">%s</span><span class="s2">, </span><span class="si">%s</span><span class="s2">"</span> <span class="o">%</span><span class="p">(</span><span class="n">model_name</span><span class="p">,</span> <span class="n">frozen_file</span><span class="p">,</span> <span class="n">opt_file</span><span class="p">,</span> <span class="n">width</span><span class="p">,</span> <span class="n">height</span><span class="p">,</span> <span class="n">channels</span><span class="p">,</span> <span class="n">model_input</span><span class="p">,</span> <span class="n">model_output</span><span class="p">)</span> <span class="p">)</span>
<span class="c1"># obtain the TF session</span>
<span class="n">sess</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">compat</span><span class="o">.</span><span class="n">v1</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">backend</span><span class="o">.</span><span class="n">get_session</span><span class="p">()</span>
<span class="c1"># save checkpoint files for freeze_graph</span>
<span class="n">ckpt_file</span> <span class="o">=</span> <span class="s1">'/tmp/'</span> <span class="o">+</span> <span class="n">model_name</span> <span class="o">+</span> <span class="s1">'/'</span> <span class="o">+</span> <span class="n">model_name</span> <span class="o">+</span> <span class="s1">'.ckpt'</span>
<span class="n">graph_file</span> <span class="o">=</span> <span class="s1">'/tmp/'</span> <span class="o">+</span> <span class="n">model_name</span> <span class="o">+</span> <span class="s1">'/'</span> <span class="o">+</span> <span class="n">model_name</span> <span class="o">+</span> <span class="s1">'.pb'</span>
<span class="n">tf</span><span class="o">.</span><span class="n">compat</span><span class="o">.</span><span class="n">v1</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">Saver</span><span class="p">()</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="n">sess</span><span class="p">,</span> <span class="n">ckpt_file</span><span class="p">)</span>
<span class="n">tf</span><span class="o">.</span><span class="n">io</span><span class="o">.</span><span class="n">write_graph</span><span class="p">(</span><span class="n">sess</span><span class="o">.</span><span class="n">graph</span><span class="o">.</span><span class="n">as_graph_def</span><span class="p">(),</span> <span class="n">logdir</span><span class="o">=</span><span class="s1">'.'</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="n">graph_file</span><span class="p">,</span> <span class="n">as_text</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">model_output</span><span class="p">)</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">compat</span><span class="o">.</span><span class="n">v1</span><span class="o">.</span><span class="n">Session</span><span class="p">(</span><span class="n">graph</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">Graph</span><span class="p">())</span> <span class="k">as</span> <span class="n">sess</span><span class="p">:</span>
<span class="n">saver</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">compat</span><span class="o">.</span><span class="n">v1</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">import_meta_graph</span><span class="p">(</span><span class="n">ckpt_file</span> <span class="o">+</span> <span class="s1">'.meta'</span><span class="p">)</span>
<span class="n">saver</span><span class="o">.</span><span class="n">restore</span><span class="p">(</span><span class="n">sess</span><span class="p">,</span> <span class="n">ckpt_file</span><span class="p">)</span>
<span class="n">output_graph_def</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">compat</span><span class="o">.</span><span class="n">v1</span><span class="o">.</span><span class="n">graph_util</span><span class="o">.</span><span class="n">convert_variables_to_constants</span><span class="p">(</span>
<span class="n">sess</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">compat</span><span class="o">.</span><span class="n">v1</span><span class="o">.</span><span class="n">get_default_graph</span><span class="p">()</span><span class="o">.</span><span class="n">as_graph_def</span><span class="p">(),</span> <span class="p">[</span><span class="n">model_output</span><span class="p">])</span>
<span class="n">output_graph_def</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">compat</span><span class="o">.</span><span class="n">v1</span><span class="o">.</span><span class="n">graph_util</span><span class="o">.</span><span class="n">remove_training_nodes</span><span class="p">(</span>
<span class="n">output_graph_def</span><span class="p">,</span> <span class="n">protected_nodes</span><span class="o">=</span><span class="p">[</span><span class="n">model_output</span><span class="p">])</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">frozen_file</span><span class="p">,</span> <span class="s1">'wb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">f</span><span class="o">.</span><span class="n">write</span><span class="p">(</span><span class="n">output_graph_def</span><span class="o">.</span><span class="n">SerializeToString</span><span class="p">())</span>
</pre></div>
</div>
</div>
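<p>Before optimizing the graph, it can help to sanity-check the frozen file. The following is a minimal sketch (not part of the original tutorial) that reloads resnet50_fp32_keras.pb and runs one FP32 forward pass on random data; the tensor names input_1:0 and probs/Softmax:0 match those printed by the extraction step above.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>import numpy as np
import tensorflow as tf

# Load the frozen graph produced by the extraction step above.
graph_def = tf.compat.v1.GraphDef()
with open('resnet50_fp32_keras.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name='')
    # Run a single forward pass on random data as a smoke test.
    dummy = np.random.rand(1, 224, 224, 3).astype(np.float32)
    probs = sess.run('probs/Softmax:0', feed_dict={'input_1:0': dummy})
    print('output shape:', probs.shape)  # expect (1, 1000)
</pre></div>
</div>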
<p>Next, optimize the extracted Keras ResNet50 FP32 graph for inference before casting (resnet50_fp32_keras_opt.pb will be generated) by applying the following transformations to the graph:</p>
<ul class="simple">
<li><p>Remove Identity and CheckNumerics nodes</p></li>
<li><p>Fold FusedBatchNorm constants into previous Conv2D weights</p></li>
<li><p>Fold other constants</p></li>
<li><p>Strip unused nodes</p></li>
<li><p>Sort by execution order</p></li>
</ul>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">copy</span>
<span class="kn">import</span> <span class="nn">string</span>
<span class="kn">from</span> <span class="nn">google.protobuf</span> <span class="kn">import</span> <span class="n">text_format</span>
<span class="kn">from</span> <span class="nn">tensorflow.core.framework</span> <span class="kn">import</span> <span class="n">node_def_pb2</span>
<span class="kn">from</span> <span class="nn">tensorflow.core.framework</span> <span class="kn">import</span> <span class="n">attr_value_pb2</span>
<span class="kn">from</span> <span class="nn">tensorflow.python.framework</span> <span class="kn">import</span> <span class="n">tensor_util</span>
<span class="kn">from</span> <span class="nn">tensorflow.tools.graph_transforms</span> <span class="kn">import</span> <span class="n">TransformGraph</span>
<span class="k">def</span> <span class="nf">clear_input</span><span class="p">(</span><span class="n">node</span><span class="p">):</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">node</span><span class="o">.</span><span class="n">input</span><span class="p">)):</span>
<span class="n">node</span><span class="o">.</span><span class="n">input</span><span class="o">.</span><span class="n">pop</span><span class="p">()</span>
<span class="k">def</span> <span class="nf">replace_name</span><span class="p">(</span><span class="n">node</span><span class="p">,</span> <span class="n">name</span><span class="p">):</span>
<span class="n">node</span><span class="o">.</span><span class="n">name</span> <span class="o">=</span> <span class="n">name</span>
<span class="k">def</span> <span class="nf">replace_input</span><span class="p">(</span><span class="n">node</span><span class="p">,</span> <span class="n">input_name</span><span class="p">,</span> <span class="n">new_name</span><span class="p">):</span>
<span class="c1"># node.input.replace(input_name, new_name)</span>
<span class="n">temp</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="n">node</span><span class="o">.</span><span class="n">input</span><span class="p">:</span>
<span class="n">temp</span><span class="o">.</span><span class="n">extend</span><span class="p">([</span><span class="n">new_name</span> <span class="k">if</span> <span class="n">i</span> <span class="o">==</span> <span class="n">input_name</span> <span class="k">else</span> <span class="n">i</span><span class="p">])</span>
<span class="n">clear_input</span><span class="p">(</span><span class="n">node</span><span class="p">)</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="n">temp</span><span class="p">:</span>
<span class="n">node</span><span class="o">.</span><span class="n">input</span><span class="o">.</span><span class="n">extend</span><span class="p">([</span><span class="n">i</span><span class="p">])</span>
<span class="k">def</span> <span class="nf">swap_names</span><span class="p">(</span><span class="n">node1</span><span class="p">,</span> <span class="n">node2</span><span class="p">):</span>
<span class="n">temp</span> <span class="o">=</span> <span class="n">node2</span><span class="o">.</span><span class="n">name</span>
<span class="n">node2</span><span class="o">.</span><span class="n">name</span> <span class="o">=</span> <span class="n">node1</span><span class="o">.</span><span class="n">name</span>
<span class="n">node1</span><span class="o">.</span><span class="n">name</span> <span class="o">=</span> <span class="n">temp</span>
<span class="k">def</span> <span class="nf">get_const_node</span><span class="p">(</span><span class="n">const_node_name</span><span class="p">,</span> <span class="n">const_by_name</span><span class="p">):</span>
<span class="n">name</span> <span class="o">=</span> <span class="n">re</span><span class="o">.</span><span class="n">sub</span><span class="p">(</span><span class="s2">"/read$"</span><span class="p">,</span> <span class="s2">""</span><span class="p">,</span> <span class="n">const_node_name</span><span class="p">)</span>
<span class="k">return</span> <span class="n">const_by_name</span><span class="p">[</span><span class="n">name</span><span class="p">]</span>
<span class="k">def</span> <span class="nf">get_const_ndarray</span><span class="p">(</span><span class="n">const_node_name</span><span class="p">,</span> <span class="n">const_by_name</span><span class="p">):</span>
<span class="n">name</span> <span class="o">=</span> <span class="n">re</span><span class="o">.</span><span class="n">sub</span><span class="p">(</span><span class="s2">"/read$"</span><span class="p">,</span> <span class="s2">""</span><span class="p">,</span> <span class="n">const_node_name</span><span class="p">)</span>
<span class="n">node</span> <span class="o">=</span> <span class="n">const_by_name</span><span class="p">[</span><span class="n">name</span><span class="p">]</span>
<span class="k">return</span> <span class="n">tf</span><span class="o">.</span><span class="n">make_ndarray</span><span class="p">(</span><span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="o">.</span><span class="n">get</span><span class="p">(</span><span class="s2">"value"</span><span class="p">)</span><span class="o">.</span><span class="n">tensor</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">adjust_bias_values</span><span class="p">(</span><span class="n">bias_node</span><span class="p">,</span> <span class="n">fbn_node</span><span class="p">,</span> <span class="n">const_by_name</span><span class="p">):</span>
<span class="n">bias_val</span> <span class="o">=</span> <span class="n">get_const_ndarray</span><span class="p">(</span><span class="n">bias_node</span><span class="o">.</span><span class="n">input</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">const_by_name</span><span class="p">)</span>
<span class="n">gamma_val</span> <span class="o">=</span> <span class="n">get_const_ndarray</span><span class="p">(</span><span class="n">fbn_node</span><span class="o">.</span><span class="n">input</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">const_by_name</span><span class="p">)</span>
<span class="n">mean_val</span> <span class="o">=</span> <span class="n">get_const_ndarray</span><span class="p">(</span><span class="n">fbn_node</span><span class="o">.</span><span class="n">input</span><span class="p">[</span><span class="mi">3</span><span class="p">],</span> <span class="n">const_by_name</span><span class="p">)</span>
<span class="n">variance_val</span> <span class="o">=</span> <span class="n">get_const_ndarray</span><span class="p">(</span><span class="n">fbn_node</span><span class="o">.</span><span class="n">input</span><span class="p">[</span><span class="mi">4</span><span class="p">],</span> <span class="n">const_by_name</span><span class="p">)</span>
<span class="n">new_bias</span> <span class="o">=</span> <span class="n">bias_val</span> <span class="o">*</span> <span class="n">gamma_val</span> <span class="o">/</span> <span class="n">np</span><span class="o">.</span><span class="n">sqrt</span><span class="p">(</span><span class="n">variance_val</span><span class="p">)</span>
<span class="n">new_tensor</span> <span class="o">=</span> <span class="n">tensor_util</span><span class="o">.</span><span class="n">make_tensor_proto</span><span class="p">(</span><span class="n">new_bias</span><span class="p">,</span> <span class="n">new_bias</span><span class="o">.</span><span class="n">dtype</span><span class="p">,</span> <span class="n">new_bias</span><span class="o">.</span><span class="n">shape</span><span class="p">)</span>
<span class="n">bias_const_node</span> <span class="o">=</span> <span class="n">get_const_node</span><span class="p">(</span><span class="n">bias_node</span><span class="o">.</span><span class="n">input</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">const_by_name</span><span class="p">)</span>
<span class="n">bias_const_node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"value"</span><span class="p">]</span><span class="o">.</span><span class="n">CopyFrom</span><span class="p">(</span><span class="n">attr_value_pb2</span><span class="o">.</span><span class="n">AttrValue</span><span class="p">(</span><span class="n">tensor</span><span class="o">=</span><span class="n">new_tensor</span><span class="p">))</span>
<span class="k">def</span> <span class="nf">MoveBiasAddAfterFusedBatchNorm</span><span class="p">(</span><span class="n">graphdef</span><span class="p">):</span>
<span class="w"> </span><span class="sd">"""fold_batch_norm function of TransformGraph is unable to fold Keras ResNet50</span>
<span class="sd"> because of BiasAdd between Conv2D and FusedBatchNorm (BiasAdd is not needed</span>
<span class="sd"> if FusedBatchNorm is used, but it exists in Keras ResNet50). Here, we</span>
<span class="sd"> move BiasAdd to after FusedBatchNorm, and adjust bias value by gamma/sqrt(variance).</span>
<span class="sd"> """</span>
<span class="n">sess</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">compat</span><span class="o">.</span><span class="n">v1</span><span class="o">.</span><span class="n">Session</span><span class="p">(</span><span class="n">graph</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">import_graph_def</span><span class="p">(</span><span class="n">graphdef</span><span class="p">))</span>
<span class="n">output_graph_def</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">compat</span><span class="o">.</span><span class="n">v1</span><span class="o">.</span><span class="n">GraphDef</span><span class="p">()</span>
<span class="n">node_by_name</span> <span class="o">=</span> <span class="p">{}</span>
<span class="n">const_by_name</span> <span class="o">=</span> <span class="p">{}</span>
<span class="k">for</span> <span class="n">node</span> <span class="ow">in</span> <span class="n">graphdef</span><span class="o">.</span><span class="n">node</span><span class="p">:</span>
<span class="c1"># Hack: use FusedBatchNormV2 so fold_batch_norm can recognize</span>
<span class="k">if</span> <span class="n">node</span><span class="o">.</span><span class="n">op</span> <span class="o">==</span> <span class="s2">"FusedBatchNormV3"</span><span class="p">:</span>
<span class="n">node</span><span class="o">.</span><span class="n">op</span> <span class="o">=</span> <span class="s2">"FusedBatchNorm"</span>
<span class="k">del</span><span class="p">(</span><span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"U"</span><span class="p">])</span>
<span class="c1">#import pdb; pdb.set_trace()</span>
<span class="n">copied_node</span> <span class="o">=</span> <span class="n">node_def_pb2</span><span class="o">.</span><span class="n">NodeDef</span><span class="p">()</span>
<span class="n">copied_node</span><span class="o">.</span><span class="n">CopyFrom</span><span class="p">(</span><span class="n">node</span><span class="p">)</span>
<span class="n">node_by_name</span><span class="p">[</span><span class="n">node</span><span class="o">.</span><span class="n">name</span><span class="p">]</span> <span class="o">=</span> <span class="n">copied_node</span>
<span class="n">skip_add_node</span> <span class="o">=</span> <span class="kc">False</span>
<span class="c1"># Switch Mul/BiasAdd in Keras RN50 so fold_batch_norm transform would work</span>
<span class="k">if</span> <span class="n">node</span><span class="o">.</span><span class="n">op</span> <span class="o">==</span> <span class="s2">"Const"</span><span class="p">:</span>
<span class="n">const_by_name</span><span class="p">[</span><span class="n">node</span><span class="o">.</span><span class="n">name</span><span class="p">]</span> <span class="o">=</span> <span class="n">copied_node</span>
<span class="k">elif</span> <span class="n">node</span><span class="o">.</span><span class="n">op</span><span class="o">.</span><span class="n">startswith</span><span class="p">(</span><span class="s2">"FusedBatchNorm"</span><span class="p">):</span>
<span class="n">inputs</span> <span class="o">=</span> <span class="n">node</span><span class="o">.</span><span class="n">input</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="n">inputs</span><span class="p">:</span>
<span class="n">input_node</span> <span class="o">=</span> <span class="n">node_by_name</span><span class="p">[</span><span class="n">i</span><span class="p">]</span>
<span class="k">if</span> <span class="n">input_node</span><span class="o">.</span><span class="n">op</span> <span class="o">==</span> <span class="s2">"BiasAdd"</span><span class="p">:</span>
<span class="n">output_graph_def</span><span class="o">.</span><span class="n">node</span><span class="o">.</span><span class="n">remove</span><span class="p">(</span><span class="n">input_node</span><span class="p">)</span>
<span class="n">input_node_input0</span> <span class="o">=</span> <span class="n">input_node</span><span class="o">.</span><span class="n">input</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="c1"># Adjust bias values (multiply by scale/sqrt(variance))</span>
<span class="n">adjust_bias_values</span><span class="p">(</span><span class="n">input_node</span><span class="p">,</span> <span class="n">node</span><span class="p">,</span> <span class="n">const_by_name</span><span class="p">)</span>
<span class="c1"># Hack: swap names to avoid changing input of activation</span>
<span class="n">swap_names</span><span class="p">(</span><span class="n">copied_node</span><span class="p">,</span> <span class="n">input_node</span><span class="p">)</span>
<span class="c1"># Fix inputs for these two ops</span>
<span class="n">replace_input</span><span class="p">(</span><span class="n">copied_node</span><span class="p">,</span> <span class="n">i</span><span class="p">,</span> <span class="n">input_node_input0</span><span class="p">)</span>
<span class="n">replace_input</span><span class="p">(</span><span class="n">input_node</span><span class="p">,</span> <span class="n">input_node_input0</span><span class="p">,</span> <span class="n">copied_node</span><span class="o">.</span><span class="n">name</span><span class="p">)</span>
<span class="c1"># Fix order in node list</span>
<span class="n">output_graph_def</span><span class="o">.</span><span class="n">node</span><span class="o">.</span><span class="n">extend</span><span class="p">([</span><span class="n">copied_node</span><span class="p">])</span>
<span class="n">output_graph_def</span><span class="o">.</span><span class="n">node</span><span class="o">.</span><span class="n">extend</span><span class="p">([</span><span class="n">input_node</span><span class="p">])</span>
<span class="n">skip_add_node</span> <span class="o">=</span> <span class="kc">True</span>
<span class="c1"># Add maybe-modified nodes if not already done</span>
<span class="k">if</span> <span class="ow">not</span> <span class="n">skip_add_node</span><span class="p">:</span>
<span class="n">output_graph_def</span><span class="o">.</span><span class="n">node</span><span class="o">.</span><span class="n">extend</span><span class="p">([</span><span class="n">copied_node</span><span class="p">])</span>
<span class="k">return</span> <span class="n">output_graph_def</span>
<span class="k">def</span> <span class="nf">FoldFusedBatchNorm</span><span class="p">(</span><span class="n">graph_def</span><span class="p">):</span>
<span class="w"> </span><span class="sd">"""Optimize training graph for inference:</span>
<span class="sd"> - Remove Identity and CheckNumerics nodes</span>
<span class="sd"> - Fold FusedBatchNorm constants into previous Conv2D weights</span>
<span class="sd"> - Fold other constants</span>
<span class="sd"> - Strip unused nodes</span>
<span class="sd"> - Sort by execution order</span>
<span class="sd"> """</span>
<span class="n">transformed_graph_def</span> <span class="o">=</span> <span class="n">TransformGraph</span> <span class="p">(</span>
<span class="n">graph_def</span><span class="p">,</span>
<span class="p">[</span><span class="s1">'input_1'</span><span class="p">],</span>
<span class="p">[</span><span class="s1">'probs/Softmax'</span><span class="p">],</span>
<span class="p">[</span>
<span class="s1">'add_default_attributes'</span><span class="p">,</span>
<span class="s1">'remove_nodes(op=Identity, op=CheckNumerics)'</span><span class="p">,</span>
<span class="s1">'fold_constants(ignore_errors=true)'</span><span class="p">,</span>
<span class="s1">'fold_batch_norms'</span><span class="p">,</span>
<span class="s1">'fold_old_batch_norms'</span><span class="p">,</span>
<span class="s1">'strip_unused_nodes'</span><span class="p">,</span>
<span class="s1">'sort_by_execution_order'</span><span class="p">,</span>
<span class="p">])</span>
<span class="k">return</span> <span class="n">transformed_graph_def</span>
<span class="k">def</span> <span class="nf">load_graph</span><span class="p">(</span><span class="n">model_file</span><span class="p">):</span>
<span class="n">graph_def</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">compat</span><span class="o">.</span><span class="n">v1</span><span class="o">.</span><span class="n">GraphDef</span><span class="p">()</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">model_file</span><span class="p">,</span> <span class="s2">"rb"</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">graph_def</span><span class="o">.</span><span class="n">ParseFromString</span><span class="p">(</span><span class="n">f</span><span class="o">.</span><span class="n">read</span><span class="p">())</span>
<span class="k">return</span> <span class="n">graph_def</span>
<span class="n">graph_orig</span> <span class="o">=</span> <span class="n">load_graph</span><span class="p">(</span><span class="s1">'resnet50_fp32_keras.pb'</span><span class="p">)</span>
<span class="n">graph_mod</span> <span class="o">=</span> <span class="n">MoveBiasAddAfterFusedBatchNorm</span><span class="p">(</span><span class="n">graph_orig</span><span class="p">)</span>
<span class="n">graph_mod2</span> <span class="o">=</span> <span class="n">FoldFusedBatchNorm</span><span class="p">(</span><span class="n">graph_mod</span><span class="p">)</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">io</span><span class="o">.</span><span class="n">gfile</span><span class="o">.</span><span class="n">GFile</span><span class="p">(</span><span class="s1">'resnet50_fp32_keras_opt.pb'</span><span class="p">,</span> <span class="s2">"wb"</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">f</span><span class="o">.</span><span class="n">write</span><span class="p">(</span><span class="n">graph_mod2</span><span class="o">.</span><span class="n">SerializeToString</span><span class="p">())</span>
</pre></div>
</div>
</div>
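<p>To confirm that the transformations took effect, you can compare op histograms of the original and optimized graphs; after folding, the FusedBatchNorm nodes should be gone. This optional sketch is not part of the original tutorial:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>from collections import Counter
import tensorflow as tf

def op_histogram(path):
    # Count how many nodes of each op type a frozen graph contains.
    graph_def = tf.compat.v1.GraphDef()
    with open(path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    return Counter(node.op for node in graph_def.node)

before = op_histogram('resnet50_fp32_keras.pb')
after = op_histogram('resnet50_fp32_keras_opt.pb')
print('FusedBatchNorm before:', before['FusedBatchNorm'] + before['FusedBatchNormV3'])
print('FusedBatchNorm after: ', after['FusedBatchNorm'] + after['FusedBatchNormV3'])
</pre></div>
</div>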
<p>Convert the full graph to FP16 (resnet50_fp16_keras_opt.pb will be generated). This will take about a minute.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">tensorflow.core.framework</span> <span class="kn">import</span> <span class="n">graph_pb2</span>
<span class="kn">from</span> <span class="nn">tensorflow.python.platform</span> <span class="kn">import</span> <span class="n">gfile</span>
<span class="k">def</span> <span class="nf">ConvertFP32ToOther</span><span class="p">(</span><span class="n">graphdef</span><span class="p">):</span>
<span class="w"> </span><span class="sd">"""Converts an FP32 network by casting all constants (weights) to a lower</span>
<span class="sd"> precision floating point type (FP16) and updating the dtypes</span>
<span class="sd"> everywhere."""</span>
<span class="n">cast_type</span> <span class="o">=</span> <span class="s2">"float16"</span>
<span class="n">sess</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">Session</span><span class="p">(</span><span class="n">graph</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">import_graph_def</span><span class="p">(</span><span class="n">graphdef</span><span class="p">))</span>
<span class="n">output_graph_def</span> <span class="o">=</span> <span class="n">graph_pb2</span><span class="o">.</span><span class="n">GraphDef</span><span class="p">()</span>
<span class="n">dummy_tensor</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">([</span><span class="mf">0.1</span><span class="p">]))</span>
<span class="n">dummy_tensor_proto</span> <span class="o">=</span> <span class="n">tensor_util</span><span class="o">.</span><span class="n">make_tensor_proto</span><span class="p">(</span><span class="n">dummy_tensor</span><span class="p">,</span> \
<span class="n">dtype</span><span class="o">=</span><span class="n">cast_type</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="n">dummy_tensor</span><span class="o">.</span><span class="n">shape</span><span class="p">)</span>
<span class="n">dummy_tensor32</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">([</span><span class="mf">0.1</span><span class="p">]))</span>
<span class="n">dummy_tensor_proto32</span> <span class="o">=</span> <span class="n">tensor_util</span><span class="o">.</span><span class="n">make_tensor_proto</span><span class="p">(</span><span class="n">dummy_tensor</span><span class="p">,</span> \
<span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="n">dummy_tensor</span><span class="o">.</span><span class="n">shape</span><span class="p">)</span>
<span class="n">dt_float_type_attr</span> <span class="o">=</span> <span class="n">attr_value_pb2</span><span class="o">.</span><span class="n">AttrValue</span><span class="p">(</span><span class="nb">type</span><span class="o">=</span><span class="n">dummy_tensor_proto32</span><span class="o">.</span><span class="n">dtype</span><span class="p">)</span>
<span class="n">dt_half_type_attr</span> <span class="o">=</span> <span class="n">attr_value_pb2</span><span class="o">.</span><span class="n">AttrValue</span><span class="p">(</span><span class="nb">type</span><span class="o">=</span><span class="n">dummy_tensor_proto</span><span class="o">.</span><span class="n">dtype</span><span class="p">)</span>
<span class="k">for</span> <span class="n">node</span> <span class="ow">in</span> <span class="n">graphdef</span><span class="o">.</span><span class="n">node</span><span class="p">:</span>
<span class="n">output_node</span> <span class="o">=</span> <span class="n">node_def_pb2</span><span class="o">.</span><span class="n">NodeDef</span><span class="p">()</span>
<span class="n">output_node</span><span class="o">.</span><span class="n">CopyFrom</span><span class="p">(</span><span class="n">node</span><span class="p">)</span>
<span class="k">if</span> <span class="p">(</span><span class="n">node</span><span class="o">.</span><span class="n">op</span> <span class="o">==</span> <span class="s2">"Const"</span><span class="p">):</span>
<span class="k">if</span> <span class="p">(</span><span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"dtype"</span><span class="p">]</span> <span class="o">==</span> <span class="n">dt_float_type_attr</span><span class="p">):</span>
<span class="n">a</span> <span class="o">=</span> <span class="n">tensor_util</span><span class="o">.</span><span class="n">MakeNdarray</span><span class="p">(</span><span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"value"</span><span class="p">]</span><span class="o">.</span><span class="n">tensor</span><span class="p">)</span>
<span class="n">a</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">cast</span><span class="p">(</span><span class="n">a</span><span class="p">,</span> <span class="n">cast_type</span><span class="p">)</span>
<span class="n">a</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">a</span><span class="p">)</span>
<span class="n">output_node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"dtype"</span><span class="p">]</span><span class="o">.</span><span class="n">CopyFrom</span><span class="p">(</span><span class="n">dt_half_type_attr</span><span class="p">)</span>
<span class="n">output_node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"value"</span><span class="p">]</span><span class="o">.</span><span class="n">CopyFrom</span><span class="p">(</span>
<span class="n">attr_value_pb2</span><span class="o">.</span><span class="n">AttrValue</span><span class="p">(</span>
<span class="n">tensor</span><span class="o">=</span><span class="n">tensor_util</span><span class="o">.</span><span class="n">make_tensor_proto</span><span class="p">(</span><span class="n">a</span><span class="p">,</span>\
<span class="n">dtype</span><span class="o">=</span><span class="n">cast_type</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="n">a</span><span class="o">.</span><span class="n">shape</span><span class="p">)))</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">if</span> <span class="p">(</span><span class="s2">"T"</span> <span class="ow">in</span> <span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="o">.</span><span class="n">keys</span><span class="p">()):</span>
<span class="k">if</span> <span class="p">(</span><span class="n">output_node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"T"</span><span class="p">]</span> <span class="o">==</span> <span class="n">dt_float_type_attr</span><span class="p">):</span>
<span class="n">output_node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"T"</span><span class="p">]</span><span class="o">.</span><span class="n">CopyFrom</span><span class="p">(</span><span class="n">dt_half_type_attr</span><span class="p">)</span>
<span class="k">if</span> <span class="p">(</span><span class="s2">"Tparams"</span> <span class="ow">in</span> <span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="o">.</span><span class="n">keys</span><span class="p">()):</span>
<span class="k">if</span> <span class="p">(</span><span class="n">output_node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"Tparams"</span><span class="p">]</span> <span class="o">==</span> <span class="n">dt_float_type_attr</span><span class="p">):</span>
<span class="n">output_node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"Tparams"</span><span class="p">]</span><span class="o">.</span><span class="n">CopyFrom</span><span class="p">(</span><span class="n">dt_half_type_attr</span><span class="p">)</span>
<span class="k">if</span> <span class="p">(</span><span class="s2">"dtype"</span> <span class="ow">in</span> <span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="o">.</span><span class="n">keys</span><span class="p">()):</span>
<span class="k">if</span> <span class="p">(</span><span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"dtype"</span><span class="p">]</span> <span class="o">==</span> <span class="n">dt_float_type_attr</span><span class="p">):</span>
<span class="n">output_node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"dtype"</span><span class="p">]</span><span class="o">.</span><span class="n">CopyFrom</span><span class="p">(</span><span class="n">dt_half_type_attr</span><span class="p">)</span>
<span class="k">if</span> <span class="p">(</span><span class="s2">"SrcT"</span> <span class="ow">in</span> <span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="o">.</span><span class="n">keys</span><span class="p">()):</span>
<span class="k">if</span> <span class="p">(</span><span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"SrcT"</span><span class="p">]</span> <span class="o">==</span> <span class="n">dt_float_type_attr</span><span class="p">):</span>
<span class="n">output_node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"SrcT"</span><span class="p">]</span><span class="o">.</span><span class="n">CopyFrom</span><span class="p">(</span><span class="n">dt_half_type_attr</span><span class="p">)</span>
<span class="k">if</span> <span class="p">(</span><span class="s2">"DstT"</span> <span class="ow">in</span> <span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="o">.</span><span class="n">keys</span><span class="p">()):</span>
<span class="k">if</span> <span class="p">(</span><span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"DstT"</span><span class="p">]</span> <span class="o">==</span> <span class="n">dt_float_type_attr</span><span class="p">):</span>
<span class="n">output_node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s2">"DstT"</span><span class="p">]</span><span class="o">.</span><span class="n">CopyFrom</span><span class="p">(</span><span class="n">dt_half_type_attr</span><span class="p">)</span>
<span class="n">output_graph_def</span><span class="o">.</span><span class="n">node</span><span class="o">.</span><span class="n">extend</span><span class="p">([</span><span class="n">output_node</span><span class="p">])</span>
<span class="k">return</span> <span class="n">output_graph_def</span>
<span class="k">def</span> <span class="nf">load_graph</span><span class="p">(</span><span class="n">model_file</span><span class="p">):</span>
<span class="n">graph_def</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">GraphDef</span><span class="p">()</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">model_file</span><span class="p">,</span> <span class="s2">"rb"</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">graph_def</span><span class="o">.</span><span class="n">ParseFromString</span><span class="p">(</span><span class="n">f</span><span class="o">.</span><span class="n">read</span><span class="p">())</span>
<span class="k">return</span> <span class="n">graph_def</span>
<span class="n">graph_f32</span> <span class="o">=</span> <span class="n">load_graph</span><span class="p">(</span><span class="s1">'resnet50_fp32_keras_opt.pb'</span><span class="p">)</span>
<span class="n">graph_f16</span> <span class="o">=</span> <span class="n">ConvertFP32ToOther</span><span class="p">(</span><span class="n">graph_f32</span><span class="p">)</span>
<span class="n">output_xformed_graph_name</span> <span class="o">=</span> <span class="s1">'resnet50_fp16_keras_opt.pb'</span>
<span class="k">with</span> <span class="n">gfile</span><span class="o">.</span><span class="n">GFile</span><span class="p">(</span><span class="n">output_xformed_graph_name</span><span class="p">,</span> <span class="s2">"wb"</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">f</span><span class="o">.</span><span class="n">write</span><span class="p">(</span><span class="n">graph_f16</span><span class="o">.</span><span class="n">SerializeToString</span><span class="p">())</span>
</pre></div>
</div>
</div>
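<p>Casting weights to FP16 should have only a small effect on accuracy. As a quick check, the following sketch (assuming both .pb files generated by the steps above, and that the converted graph now expects a float16 input) runs the same input through the FP32 and FP16 graphs and compares the predictions:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>import numpy as np
import tensorflow as tf

def run_graph(path, feed, dtype):
    # Load a frozen graph and run one forward pass at the given precision.
    graph_def = tf.compat.v1.GraphDef()
    with open(path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    with tf.compat.v1.Session(graph=tf.Graph()) as sess:
        tf.import_graph_def(graph_def, name='')
        return sess.run('probs/Softmax:0',
                        feed_dict={'input_1:0': feed.astype(dtype)})

image = np.random.rand(1, 224, 224, 3)
p32 = run_graph('resnet50_fp32_keras_opt.pb', image, np.float32)
p16 = run_graph('resnet50_fp16_keras_opt.pb', image, np.float16)
# The top-1 class should normally agree between the two precisions.
print('top-1 fp32:', p32.argmax(), ' top-1 fp16:', p16.argmax())
print('max abs diff:', np.abs(p32.astype(np.float32) - p16.astype(np.float32)).max())
</pre></div>
</div>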
<p>Run the sweep below to compile for batch sizes up to 5 and several NeuronCore group sizes up to 16. It invokes the compilation script pb2sm_compile.py for each configuration (a sketch of the essential compile flow follows the sweep results below). Some error messages are expected due to known issues (see the Known Issues section at the end of this tutorial). Running all the configurations takes about 45 minutes.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-bash notranslate"><div class="highlight"><pre><span></span>%%bash
<span class="c1">#!/usr/bin/env bash</span>
<span class="nb">echo</span><span class="w"> </span><span class="s2">""</span><span class="w"> </span>><span class="w"> </span>full_sweep.log
<span class="nb">echo</span><span class="w"> </span><span class="s2">""</span><span class="w"> </span>><span class="w"> </span>full_sweep_results.txt
<span class="nv">results</span><span class="o">=()</span>
<span class="k">for</span><span class="w"> </span>b<span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="k">$(</span>seq<span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="m">5</span><span class="k">)</span><span class="p">;</span><span class="w"> </span><span class="k">do</span>
<span class="w"> </span><span class="k">for</span><span class="w"> </span>i<span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="m">2</span><span class="w"> </span><span class="m">4</span><span class="w"> </span><span class="m">8</span><span class="w"> </span><span class="m">12</span><span class="w"> </span><span class="m">16</span><span class="p">;</span><span class="w"> </span><span class="k">do</span>
<span class="w"> </span>python<span class="w"> </span>pb2sm_compile.py<span class="w"> </span>--batch_size<span class="o">=</span><span class="nv">$b</span><span class="w"> </span>--neuroncore-pipeline-cores<span class="o">=</span><span class="nv">$i</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>tee<span class="w"> </span>-a<span class="w"> </span>full_sweep.log<span class="p">;</span>
<span class="w"> </span>results<span class="o">[</span><span class="nv">$b</span><span class="o">]</span>+<span class="o">=</span><span class="s2">", "</span><span class="sb">`</span>tail<span class="w"> </span>-1<span class="w"> </span>full_sweep.log<span class="sb">`</span>
<span class="w"> </span><span class="k">done</span>
<span class="k">done</span>
<span class="nv">head</span><span class="o">=</span><span class="s2">"batch"</span>
<span class="k">for</span><span class="w"> </span>i<span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="m">2</span><span class="w"> </span><span class="m">4</span><span class="w"> </span><span class="m">8</span><span class="w"> </span><span class="m">12</span><span class="w"> </span><span class="m">16</span><span class="p">;</span><span class="w"> </span><span class="k">do</span>
<span class="w"> </span><span class="nv">head</span><span class="o">+=</span><span class="s2">", nc</span><span class="si">${</span><span class="nv">i</span><span class="si">}</span><span class="s2">"</span>
<span class="k">done</span>
<span class="nb">echo</span><span class="w"> </span><span class="nv">$head</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>tee<span class="w"> </span>-a<span class="w"> </span>full_sweep_results.txt
<span class="k">for</span><span class="w"> </span>b<span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="k">$(</span>seq<span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="m">5</span><span class="k">)</span><span class="p">;</span><span class="w"> </span><span class="k">do</span>
<span class="w"> </span><span class="nb">echo</span><span class="w"> </span><span class="nv">$b</span><span class="si">${</span><span class="nv">results</span><span class="p">[</span><span class="nv">$b</span><span class="p">]</span><span class="si">}</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>tee<span class="w"> </span>-a<span class="w"> </span>full_sweep_results.txt
<span class="k">done</span>
</pre></div>
</div>
</div>
<p>You should see some output like this:</p>
<div class="highlight-none notranslate"><div class="highlight"><pre><span></span>INFO: Compilation finished in 95 seconds with 99.5% operations placed on Inferentia
1
*** Batch size 1, num NeuronCores 2 (input shape: (1, 224, 224, 3), saved model dir: rn50_fp16_compiled_b1_nc2) ***
INFO: Compilation finished in 95 seconds with 99.5% operations placed on Inferentia
1
*** Batch size 1, num NeuronCores 4 (input shape: (1, 224, 224, 3), saved model dir: rn50_fp16_compiled_b1_nc4) ***
INFO: Compilation finished in 95 seconds with 99.5% operations placed on Inferentia
1
... (outputs removed)
*** Batch size 5, num NeuronCores 16 (input shape: (5, 224, 224, 3), saved model dir: rn50_fp16_compiled_b5_nc16) ***
ERROR: Compilation finished in 120 seconds with less than 50% operations placed on Inferentia (0.0%)
INFO: Retry compilation without static weights
ERROR: Retry compilation finished in 137 seconds with less than 50% operations placed on Inferentia (0.0%)
0
</pre></div>
</div>
<p>The file full_sweep_results.txt shows a summary of the sweep results with the latest Neuron 1/27/20 release (0 means compilation was unsuccessful and no ops mapped to Inferentia; 1 means most ops mapped to Inferentia without static weights; 2 means most ops mapped to Inferentia with static weights):</p>
<div class="highlight-none notranslate"><div class="highlight"><pre><span></span>batch, nc1, nc2, nc4, nc8, nc12, nc16
1, 1, 1, 1, 2, 2, 2
2, 1, 1, 0, 1, 2, 2
3, 1, 1, 1, 1, 1, 1
4, 1, 1, 0, 1, 1, 1
5, 1, 1, 0, 0, 0, 0
</pre></div>
</div>
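<p>The pb2sm_compile.py script is provided alongside this tutorial; refer to it for the exact arguments used in the sweep. For a single configuration, its essential flow is roughly the sketch below: wrap the FP16 frozen graph in a SavedModel, then compile it with tensorflow-neuron. The tfn.saved_model.compile keyword arguments shown here assume the tensorflow-neuron 1.x API.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>import shutil
import tensorflow as tf
import tensorflow.neuron as tfn

batch_size, num_cores = 1, 2
compiled_dir = 'rn50_fp16_compiled_b%d_nc%d/1' % (batch_size, num_cores)

# Wrap the FP16 frozen graph in a SavedModel (TF 1.x style).
graph_def = tf.compat.v1.GraphDef()
with open('resnet50_fp16_keras_opt.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name='')
    inp = sess.graph.get_tensor_by_name('input_1:0')
    out = sess.graph.get_tensor_by_name('probs/Softmax:0')
    shutil.rmtree('rn50_fp16', ignore_errors=True)
    tf.compat.v1.saved_model.simple_save(
        sess, 'rn50_fp16', inputs={'input_1:0': inp}, outputs={'probs/Softmax:0': out})

# Compile for Neuron at a fixed batch size and pipeline-core count.
shutil.rmtree(compiled_dir, ignore_errors=True)
tfn.saved_model.compile(
    'rn50_fp16', compiled_dir,
    batch_size=batch_size, dynamic_batch_size=True,
    compiler_args=['--neuroncore-pipeline-cores', str(num_cores)])
</pre></div>
</div>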
</div>
<div class="section" id="Inference">
<h2>Inference<a class="headerlink" href="#Inference" title="Permalink to this headline">#</a></h2>
<p>Run inference over different batch sizes and NeuronCore groups to obtain throughput and latency results for ResNet50. To exercise dynamic batching, the user batch size is set to 10x the compiled batch size, which keeps the input queue full and amortizes the framework-to-Neuron overhead.</p>
<p>Note: The results below are based on the Neuron v1.12.2 (Mar 4th 2021) release. They will continue to improve as Neuron performance increases.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">!</span><span class="nb">cd</span><span class="w"> </span>~/aws-neuron-sdk/src/examples/tensorflow/keras_resnet50/
<span class="o">!</span><span class="nb">echo</span><span class="w"> </span><span class="s2">""</span><span class="w"> </span>><span class="w"> </span>batch.log
<span class="o">!</span><span class="k">for</span><span class="w"> </span>i<span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="k">$(</span>seq<span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="m">5</span><span class="k">)</span><span class="p">;</span><span class="w"> </span><span class="k">do</span><span class="w"> </span>python<span class="w"> </span>infer_resnet50_keras_loadtest.py<span class="w"> </span>--batch_size<span class="o">=</span><span class="nv">$i</span><span class="w"> </span>--neuroncore-pipeline-cores<span class="o">=</span><span class="m">1</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>tee<span class="w"> </span>-a<span class="w"> </span>batch.log<span class="p">;</span><span class="w"> </span><span class="k">done</span>
<span class="o">!</span><span class="k">for</span><span class="w"> </span>i<span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="k">$(</span>seq<span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="m">5</span><span class="k">)</span><span class="p">;</span><span class="w"> </span><span class="k">do</span><span class="w"> </span>python<span class="w"> </span>infer_resnet50_keras_loadtest.py<span class="w"> </span>--batch_size<span class="o">=</span><span class="nv">$i</span><span class="w"> </span>--neuroncore-pipeline-cores<span class="o">=</span><span class="m">2</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>tee<span class="w"> </span>-a<span class="w"> </span>batch.log<span class="p">;</span><span class="w"> </span><span class="k">done</span>
<span class="o">!</span><span class="k">for</span><span class="w"> </span>i<span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="k">$(</span>seq<span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="m">5</span><span class="k">)</span><span class="p">;</span><span class="w"> </span><span class="k">do</span><span class="w"> </span>python<span class="w"> </span>infer_resnet50_keras_loadtest.py<span class="w"> </span>--batch_size<span class="o">=</span><span class="nv">$i</span><span class="w"> </span>--neuroncore-pipeline-cores<span class="o">=</span><span class="m">4</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>tee<span class="w"> </span>-a<span class="w"> </span>batch.log<span class="p">;</span><span class="w"> </span><span class="k">done</span>
<span class="o">!</span><span class="k">for</span><span class="w"> </span>i<span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="k">$(</span>seq<span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="m">5</span><span class="k">)</span><span class="p">;</span><span class="w"> </span><span class="k">do</span><span class="w"> </span>python<span class="w"> </span>infer_resnet50_keras_loadtest.py<span class="w"> </span>--batch_size<span class="o">=</span><span class="nv">$i</span><span class="w"> </span>--neuroncore-pipeline-cores<span class="o">=</span><span class="m">8</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>tee<span class="w"> </span>-a<span class="w"> </span>batch.log<span class="p">;</span><span class="w"> </span><span class="k">done</span>
<span class="o">!</span><span class="k">for</span><span class="w"> </span>i<span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="k">$(</span>seq<span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="m">5</span><span class="k">)</span><span class="p">;</span><span class="w"> </span><span class="k">do</span><span class="w"> </span>python<span class="w"> </span>infer_resnet50_keras_loadtest.py<span class="w"> </span>--batch_size<span class="o">=</span><span class="nv">$i</span><span class="w"> </span>--neuroncore-pipeline-cores<span class="o">=</span><span class="m">12</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>tee<span class="w"> </span>-a<span class="w"> </span>batch.log<span class="p">;</span><span class="w"> </span><span class="k">done</span>
<span class="o">!</span><span class="k">for</span><span class="w"> </span>i<span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="k">$(</span>seq<span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="m">5</span><span class="k">)</span><span class="p">;</span><span class="w"> </span><span class="k">do</span><span class="w"> </span>python<span class="w"> </span>infer_resnet50_keras_loadtest.py<span class="w"> </span>--batch_size<span class="o">=</span><span class="nv">$i</span><span class="w"> </span>--neuroncore-pipeline-cores<span class="o">=</span><span class="m">16</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>tee<span class="w"> </span>-a<span class="w"> </span>batch.log<span class="p">;</span><span class="w"> </span><span class="k">done</span>
</pre></div>
</div>
</div>
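<p>Each invocation of infer_resnet50_keras_loadtest.py loads one compiled SavedModel and drives it from multiple threads. To try a single dynamic-batching inference yourself, the following sketch (assuming the rn50_fp16_compiled_b5_nc2/1 directory from the compile sweep, the TF 1.x contrib predictor API, and the input_1:0 feed key; check predictor.feed_tensors if unsure) sends a user batch of 10 to a model compiled at batch size 5:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>import time
import numpy as np
import tensorflow as tf

# Load one compiled model; dynamic batching allows a larger user batch.
predictor = tf.contrib.predictor.from_saved_model('rn50_fp16_compiled_b5_nc2/1')
feed = {'input_1:0': np.random.rand(10, 224, 224, 3).astype(np.float16)}

predictor(feed)  # warm-up call loads the model onto the NeuronCores
start = time.time()
result = predictor(feed)
print('latency: %.1f ms for user batch 10' % ((time.time() - start) * 1000))
</pre></div>
</div>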
<p>The file batch.log now contains the results for each batch size; look at the throughput values to see which configurations perform well (a sketch for summarizing the log follows the sample output below). The best configuration for throughput, if you run on an inf1.6xlarge as suggested in this tutorial, is batch size 5 with a NeuronCore group size of 2; increasing the batch size usually helps to increase throughput, up to a certain point. The output should look something like this:</p>
<div class="highlight-none notranslate"><div class="highlight"><pre><span></span>*** Compiled batch size 5, user batch size 10, num NeuronCores 2 (input shape: (10, 224, 224, 3), saved model dir: ./rn50_fp16_compiled_b5_nc2/1) ***
Instance type inf1.6xlarge with 16 NeuronCores
NEURON_MAX_NUM_INFERS (env): 5
NEURONCORE_GROUP_SIZES (env): 2,2,2,2,2,2,2,2
NUM THREADS: 16
NUM_LOOPS_PER_THREAD: 400
USER_BATCH_SIZE: 10
Throughput values collected:
[10680, 10700, 10660]
(rest of outputs removed)
</pre></div>
</div>
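<p>To summarize the sweep, you can pull the collected throughput values out of batch.log and report the best observed value per configuration. A small sketch, assuming the log format shown above:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>import re

# Pair each configuration header with its collected throughput values.
header, best = None, {}
with open('batch.log') as f:
    for line in f:
        line = line.strip()
        if line.startswith('***'):
            header = line
        m = re.match(r'\[([\d, ]+)\]$', line)
        if m and header:
            best[header] = max(int(v) for v in m.group(1).split(','))

for config, throughput in sorted(best.items(), key=lambda kv: -kv[1]):
    print(throughput, config)
</pre></div>
</div>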
</div>
<div class="section" id="Known-Issues">
<h2>Known Issues<a class="headerlink" href="#Known-Issues" title="Permalink to this headline">#</a></h2>
<div class="section" id="Unable-to-compile-with-batch-and-num-NeuronCores-combination">
<h3>Unable to compile with batch and num NeuronCores combination<a class="headerlink" href="#Unable-to-compile-with-batch-and-num-NeuronCores-combination" title="Permalink to this headline">#</a></h3>
<p>For some combinations of batch size and number of NeuronCores, you may see an internal compiler error as below. See the sweep results above for the Neuron 1/27/20 release. Furthermore, auto-casting an FP32 network to bfloat16 with a batch size larger than 1 results in the same error.</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>INFO:tensorflow:fusing<span class="w"> </span>subgraph<span class="w"> </span>neuron_op_a73aed4b95ca5d5b<span class="w"> </span>with<span class="w"> </span>neuron-cc<span class="p">;</span><span class="w"> </span>log<span class="w"> </span>file<span class="w"> </span>is<span class="w"> </span>at<span class="w"> </span>/home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.neuron-cc.log
<span class="w"> </span>WARNING:tensorflow:Failed<span class="w"> </span>to<span class="w"> </span>fuse<span class="w"> </span>subgraph<span class="w"> </span>neuron_op_a73aed4b95ca5d5b<span class="w"> </span>with<span class="w"> </span><span class="s1">'/home/ubuntu/test_venv/bin/neuron-cc compile /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.pb --framework TENSORFLOW --pipeline compile SaveTemps --output /home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.neff --io-config "{\"inputs\": {\"input_10/_0:0\": [[6, 224, 224, 3], \"float16\"]}, \"outputs\": [\"probs/Softmax:0\"]}" --batching_en --rematerialization_en --sb_size 120 --spill_dis --enable-replication True'</span>
<span class="w"> </span>WARNING:tensorflow:neuron-cc<span class="w"> </span>error<span class="w"> </span>message:
<span class="w"> </span>WARNING:tensorflow:01/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>***************************************************************
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>An<span class="w"> </span>Internal<span class="w"> </span>Compiler<span class="w"> </span>Error<span class="w"> </span>has<span class="w"> </span>occurred
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>***************************************************************
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>Please<span class="w"> </span>contact<span class="w"> </span>Customer<span class="w"> </span>Support<span class="w"> </span>and<span class="w"> </span>provide<span class="w"> </span>the<span class="w"> </span>following<span class="w"> </span>details.
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>Error<span class="w"> </span>message:<span class="w"> </span>Non-zero<span class="w"> </span><span class="nb">exit</span><span class="w"> </span>status<span class="w"> </span><span class="o">(</span><span class="m">134</span><span class="o">)</span><span class="w"> </span><span class="k">for</span><span class="w"> </span>command:<span class="w"> </span>/home/ubuntu/test_venv/lib/python3.6/site-packages/neuroncc/starfish/bin/list_sch<span class="w"> </span>--hhir<span class="w"> </span>hh-tr-external-move.json<span class="w"> </span>--verbose<span class="w"> </span><span class="m">0</span><span class="w"> </span>--sb_size<span class="w"> </span><span class="m">120</span><span class="w"> </span>--arith_intensity_target<span class="w"> </span><span class="m">2300</span><span class="w"> </span>--sb_watermark_low<span class="w"> </span><span class="m">0</span>.250000<span class="w"> </span>--sb_watermark_high<span class="w"> </span><span class="m">0</span>.750000<span class="w"> </span>--sb_size_tol<span class="w"> </span><span class="m">1</span><span class="w"> </span>--alloc<span class="w"> </span>simple1<span class="w"> </span>--alloc_opt<span class="w"> </span>--depth_diff<span class="w"> </span><span class="m">0</span>.100000<span class="w"> </span>--verbose_start_cycle<span class="w"> </span><span class="m">0</span><span class="w"> </span>--tt_dist<span class="w"> </span>--mm_meet_cnt<span class="w"> </span><span class="m">1</span><span class="w"> </span>--load_speed_factor<span class="w"> </span><span class="m">0</span>.300000<span class="w"> </span>--schir<span class="w"> </span>sch_tmp.json<span class="w"> </span>--spill_depth_limit<span class="w"> </span><span class="m">5</span><span class="w"> </span>--spill_dis<span class="w"> </span>--true_dep<span class="w"> </span>--mm_order<span class="w"> </span>--batching_en<span class="w"> </span>--rematerialization_en
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>Error<span class="w"> </span>class:<span class="w"> </span>CompilerInternalError
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>Error<span class="w"> </span>location:<span class="w"> </span>job.Scheduler.3
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>Command<span class="w"> </span>line:<span class="w"> </span>/home/ubuntu/test_venv/bin/neuron-cc<span class="w"> </span>compile<span class="w"> </span>/home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.pb<span class="w"> </span>--framework<span class="w"> </span>TENSORFLOW<span class="w"> </span>--pipeline<span class="w"> </span>compile<span class="w"> </span>SaveTemps<span class="w"> </span>--output<span class="w"> </span>/home/ubuntu/keras_fp16_benchmarking_db/compiler_workdir/neuron_op_a73aed4b95ca5d5b/graph_def.neff<span class="w"> </span>--io-config<span class="w"> </span><span class="s1">'{"inputs": {"input_10/_0:0": [[6, 224, 224, 3], "float16"]}, "outputs": ["probs/Softmax:0"]}'</span><span class="w"> </span>--batching_en<span class="w"> </span>--rematerialization_en<span class="w"> </span>--sb_size<span class="w"> </span><span class="m">120</span><span class="w"> </span>--spill_dis<span class="w"> </span>--enable-replication<span class="w"> </span>True
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>Internal<span class="w"> </span>details:
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>File<span class="w"> </span><span class="s2">"neuroncc/driver/Job.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">207</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>neuroncc.driver.Job.runSingleInputFn
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>File<span class="w"> </span><span class="s2">"neuroncc/driver/jobs/Scheduler.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">58</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>neuroncc.driver.jobs.Scheduler.Scheduler.runSingleInput
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>File<span class="w"> </span><span class="s2">"neuroncc/driver/Job.py"</span>,<span class="w"> </span>line<span class="w"> </span><span class="m">145</span>,<span class="w"> </span><span class="k">in</span><span class="w"> </span>neuroncc.driver.Job.Job.shellCommand
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:40<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>Version<span class="w"> </span>information:
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:41<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>Neuron<span class="w"> </span>Compiler<span class="w"> </span>version<span class="w"> </span><span class="m">1</span>.0.6632.0+6001610955
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:41<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:41<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>HWM<span class="w"> </span>version<span class="w"> </span><span class="m">1</span>.0.839.0-6001300654
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:41<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>NEFF<span class="w"> </span>version<span class="w"> </span><span class="m">0</span>.6
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:41<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>TVM<span class="w"> </span>version<span class="w"> </span><span class="m">1</span>.0.1589.0+6001610955
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:41<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>NumPy<span class="w"> </span>version<span class="w"> </span><span class="m">1</span>.16.5
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:41<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>MXNet<span class="w"> </span>not<span class="w"> </span>available
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:41<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:<span class="w"> </span>TF<span class="w"> </span>version<span class="w"> </span><span class="m">1</span>.15.0
<span class="w"> </span><span class="m">01</span>/23/2020<span class="w"> </span><span class="m">01</span>:15:41<span class="w"> </span>AM<span class="w"> </span>ERROR<span class="w"> </span><span class="o">[</span>neuron-cc<span class="o">]</span>:
</pre></div>
</div>
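If you sweep many batch and NeuronCore combinations, a failing combination like the one above would otherwise abort the whole run. Below is a minimal sketch of a guarded sweep, assuming the per-configuration script exits non-zero when the compiler error occurs; skipping on failure is an illustrative workaround, not an official fix:

```python
# Hypothetical guarded sweep: skip batch sizes that fail to compile or run
# instead of aborting the whole sweep.
import subprocess

for batch_size in range(1, 6):
    cmd = [
        'python', 'infer_resnet50_keras_loadtest.py',
        '--batch_size={}'.format(batch_size),
        '--neuroncore-pipeline-cores=16',
    ]
    result = subprocess.run(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, universal_newlines=True)
    if result.returncode != 0:
        # Likely the internal compiler error described above; record and move on.
        print('batch_size={} failed (exit {}); skipping'.format(batch_size, result.returncode))
        continue
    with open('batch.log', 'a') as log:
        log.write(result.stdout)
```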
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span>
</pre></div>
</div>
</div>
</div>
</div>
</div>
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
<a class="left-prev" id="prev-link" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/ssd300_demo/ssd300_demo.html" title="previous page">
<i class="fas fa-angle-left"></i>
<div class="prev-next-info">
<p class="prev-next-subtitle">previous</p>
<p class="prev-next-title">Running SSD300 with AWS Neuron</p>
</div>
</a>
<a class="right-next" id="next-link" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html" title="next page">
<div class="prev-next-info">
<p class="prev-next-subtitle">next</p>
<p class="prev-next-title">Natural Language Processing (NLP) Tutorials (<code class="docutils literal notranslate"><span class="pre">tensorflow-neuron</span></code>)</p>
</div>
<i class="fas fa-angle-right"></i>
</a>
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html>
|
2023-09-29T20:54:51.014Z
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron-inference.rst.txt
|
```
.. _inference-tensorflow-neuron:

Inference on Inf1 (``tensorflow-neuron``)
=========================================

.. toctree::
   :maxdepth: 1
   :hidden:

   Tutorials </frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron>
   Additional Examples </frameworks/tensorflow/tensorflow-neuron/additional-examples>
   API Reference Guide </frameworks/tensorflow/tensorflow-neuron/api-reference-guide>
   Misc </frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron>

.. include:: tensorflow-neuron-inference.txt
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _inference-tensorflow-neuron:
Inference on Inf1 (``tensorflow-neuron``)
=========================================
.. toctree::
:maxdepth: 1
:hidden:
Tutorials </frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron>
Additional Examples </frameworks/tensorflow/tensorflow-neuron/additional-examples>
API Reference Guide </frameworks/tensorflow/tensorflow-neuron/api-reference-guide>
Misc </frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron>
.. include:: tensorflow-neuron-inference.txt</pre></body></html>
|
2023-09-29T20:54:51.024Z
|
|
Running OpenPose on Inferentia — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/src/examples/tensorflow/openpose_demo/openpose.html
|
# Running OpenPose on Inferentia — AWS Neuron Documentation
```
"""
Usage: python convert_graph_opt.py /path/to/graph_opt.pb /path/to/graph_opt_neuron.pb
"""
#import argparse
import numpy as np
import tensorflow as tf
from tensorflow.core.framework.tensor_shape_pb2 import TensorShapeProto
import tensorflow.neuron as tfn
def compile():
#parser = argparse.ArgumentParser()
#parser.add_argument('input_pb_path', help='Input serialized GraphDef protobuf')
#parser.add_argument('output_pb_path', help='Ouput serialized GraphDef protobuf')
#parser.add_argument('--net_resolution', default='656x368', help='Network resolution in WxH format, e. g., --net_resolution=656x368')
#parser.add_argument('--debug_verify', action='store_true')
#args = parser.parse_args()
input_pb_path = './graph_opt.pb'
net_resolution = '656x368'
output_pb_path = './graph_opt_neuron_' + net_resolution + '.pb'
debug_verify = 'store_true'
dim_w, dim_h = net_resolution.split('x')
dim_w = int(dim_w)
dim_h = int(dim_h)
graph_def = tf.GraphDef()
with open(input_pb_path, 'rb') as f:
graph_def.ParseFromString(f.read())
if debug_verify:
np.random.seed(0)
feed_dict = {'image:0': np.random.rand(1, dim_h, dim_w, 3)}
output_name = 'Openpose/concat_stage7:0'
with tf.Session(graph=tf.Graph()) as sess:
tf.import_graph_def(graph_def, name='')
result_reference = sess.run(output_name, feed_dict)
preprocessing_ops = {'preprocess_divide', 'preprocess_divide/y', 'preprocess_subtract', 'preprocess_subtract/y'}
graph_def = nhwc_to_nchw(graph_def, preprocessing_ops)
graph_def = inline_float32_to_float16(graph_def, preprocessing_ops)
with tf.Session(graph=tf.Graph()) as sess:
tf.import_graph_def(graph_def, name='')
no_fuse_ops = preprocessing_ops.union({'Openpose/concat_stage7'})
infer_graph = tfn.graph_util.inference_graph_from_session(
sess, shape_feed_dict={'image:0': [1, dim_h, dim_w, 3]}, output_tensors=['Openpose/concat_stage7:0'],
no_fuse_ops=no_fuse_ops, dynamic_batch_size=True,
)
with open(output_pb_path, 'wb') as f:
f.write(infer_graph.as_graph_def().SerializeToString())
if debug_verify:
with tf.Session(graph=infer_graph) as sess:
result_compiled = sess.run(output_name, feed_dict)
np.testing.assert_allclose(result_compiled, result_reference, rtol=1e-2, atol=1e-3)
def inline_float32_to_float16(graph_def, preprocessing_ops):
float32_enum = tf.float32.as_datatype_enum
float16_enum = tf.float16.as_datatype_enum
graph = tf.Graph()
with graph.as_default():
tf.import_graph_def(graph_def, name='')
graph_def = graph.as_graph_def()
for node in graph_def.node:
if node.name in preprocessing_ops or node.op == 'Placeholder':
cast_input_node_name = node.name
continue
if node.op == 'Const':
if node.attr['dtype'].type == float32_enum:
node.attr['dtype'].type = float16_enum
tensor_def = node.attr['value'].tensor
tensor_def.dtype = float16_enum
if tensor_def.tensor_content:
const_np = np.frombuffer(tensor_def.tensor_content, dtype=np.float32).astype(np.float16)
tensor_def.tensor_content = const_np.tobytes()
elif len(tensor_def.float_val):
const_np = np.array(tensor_def.float_val).astype(np.float16).view(np.uint16)
tensor_def.float_val[:] = []
tensor_def.half_val[:] = list(const_np)
else:
raise NotImplementedError
elif 'T' in node.attr and node.attr['T'].type == float32_enum:
node.attr['T'].type = float16_enum
for node in graph_def.node:
if node.name == cast_input_node_name:
node.name = '{}_PreCastFloat32ToFlot16'.format(node.name)
input_node = node
break
cast_input_node = _gen_cast_node_def(cast_input_node_name, tf.float16, input_node)
output_node = graph_def.node[-1]
cast_output_node_name = output_node.name
output_node.name = '{}_PreCastFloat16ToFlot32'.format(output_node.name)
cast_output_node = _gen_cast_node_def(cast_output_node_name, tf.float32, output_node)
preprocessing_ops.add(input_node.name)
new_graph_def = tf.GraphDef()
new_graph_def.node.extend(graph_def.node)
new_graph_def.node.append(cast_input_node)
new_graph_def.node.append(cast_output_node)
graph = tf.Graph()
with graph.as_default():
tf.import_graph_def(new_graph_def, name='')
return graph.as_graph_def()
def nhwc_to_nchw(graph_def, preprocessing_ops):
graph = tf.Graph()
with graph.as_default():
tf.import_graph_def(graph_def, name='')
graph_def = graph.as_graph_def()
node_name_to_node = {node.name: node for node in graph_def.node}
for node in graph_def.node:
if node.name in preprocessing_ops or node.op == 'Placeholder':
transpose_input_node_name = node.name
continue
if node.op == 'Conv2D':
node.attr['data_format'].s = b'NCHW'
strides = node.attr['strides'].list.i
strides[:] = [strides[0], strides[3], strides[1], strides[2]]
elif node.op == 'BiasAdd':
if node.name != 'probs/BiasAdd':
node.attr['data_format'].s = b'NCHW'
elif node.op == 'MaxPool':
node.attr['data_format'].s = b'NCHW'
ksize = node.attr['ksize'].list.i
ksize[:] = [ksize[0], ksize[3], ksize[1], ksize[2]]
strides = node.attr['strides'].list.i
strides[:] = [strides[0], strides[3], strides[1], strides[2]]
elif node.op in {'Concat', 'ConcatV2'}:
node_axes = node_name_to_node[node.input[-1]]
node_axes.attr['value'].tensor.int_val[:] = [1]
for node in graph_def.node:
if node.name == transpose_input_node_name:
node.name = '{}_PreTransposeNHWC2NCHW'.format(node.name)
input_node = node
break
transpose_input_node, transpose_input_perm_node = _gen_transpose_def(transpose_input_node_name, [0, 3, 1, 2], input_node)
output_node = graph_def.node[-1]
transpose_output_node_name = output_node.name
output_node.name = '{}_PreTransposeNCHW2NHWC'.format(output_node.name)
transpose_output_node, transpose_output_perm_node = _gen_transpose_def(transpose_output_node_name, [0, 2, 3, 1], output_node)
preprocessing_ops.add(input_node.name)
preprocessing_ops.add(transpose_input_perm_node.name)
new_graph_def = tf.GraphDef()
new_graph_def.node.extend(graph_def.node)
new_graph_def.node.append(transpose_input_perm_node)
new_graph_def.node.append(transpose_input_node)
new_graph_def.node.append(transpose_output_perm_node)
new_graph_def.node.append(transpose_output_node)
graph = tf.Graph()
with graph.as_default():
tf.import_graph_def(new_graph_def, name='')
return graph.as_graph_def()
def _gen_cast_node_def(name, target_dtype, input_node):
cast_node = tf.NodeDef(name=name, op='Cast')
cast_node.input.append(input_node.name)
cast_node.attr['DstT'].type = target_dtype.as_datatype_enum
cast_node.attr['SrcT'].type = input_node.attr['T'].type
cast_node.attr['Truncate'].b = False
return cast_node
def _gen_transpose_def(name, perm, input_node):
perm_node = tf.NodeDef(name='{}/perm'.format(name), op='Const')
perm_node.attr['dtype'].type = tf.int32.as_datatype_enum
tensor_def = perm_node.attr['value'].tensor
tensor_def.dtype = tf.int32.as_datatype_enum
tensor_def.tensor_shape.dim.append(TensorShapeProto.Dim(size=4))
tensor_def.tensor_content = np.array(perm, dtype=np.int32).tobytes()
transpose_node = tf.NodeDef(name=name, op='Transpose')
transpose_node.input.append(input_node.name)
transpose_node.input.append(perm_node.name)
transpose_node.attr['T'].type = input_node.attr['T'].type
transpose_node.attr['Tperm'].type = tf.int32.as_datatype_enum
return transpose_node, perm_node
```
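Once `compile()` has produced `graph_opt_neuron_656x368.pb`, loading and running it follows the same pattern as the verification step above. A minimal usage sketch, assuming a TensorFlow 1.x environment with `tensorflow-neuron` installed; the random input stands in for a real preprocessed image:

```python
# Load the compiled GraphDef and run one inference on a Neuron device.
import numpy as np
import tensorflow as tf

graph_def = tf.GraphDef()
with open('./graph_opt_neuron_656x368.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name='')
    # The graph was frozen with an NHWC input of shape [1, 368, 656, 3].
    image = np.random.rand(1, 368, 656, 3)
    heatmaps = sess.run('Openpose/concat_stage7:0', {'image:0': image})
    print(heatmaps.shape)
```

Because the graph was compiled with `dynamic_batch_size=True`, larger batch dimensions can be fed at runtime as well.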
|
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 current active has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 current active has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l3 current active has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l4 current active">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fsrc/examples/tensorflow/openpose_demo/openpose.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/src/examples/tensorflow/openpose_demo/openpose.ipynb" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../../_sources/src/examples/tensorflow/openpose_demo/openpose.ipynb.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.ipynb</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Note:-this-tutorial-runs-on-tensorflow-neuron-1.x-only">
Note: this tutorial runs on tensorflow-neuron 1.x only
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Introduction:">
Introduction:
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Acknowledgement:">
Acknowledgement:
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Download-tensorflow-pose-net-frozen-graph.">
Download tensorflow pose net frozen graph.
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Compile">
Compile
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Deploy">
Deploy
</a>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>Running OpenPose on Inferentia</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
<div>
<h2> Contents </h2>
</div>
<nav aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Note:-this-tutorial-runs-on-tensorflow-neuron-1.x-only">
Note: this tutorial runs on tensorflow-neuron 1.x only
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Introduction:">
Introduction:
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Acknowledgement:">
Acknowledgement:
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Download-tensorflow-pose-net-frozen-graph.">
Download tensorflow pose net frozen graph.
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Compile">
Compile
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Deploy">
Deploy
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
<style>
/* CSS for nbsphinx extension */
/* remove conflicting styling from Sphinx themes */
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt *,
div.nbinput.container div.input_area pre,
div.nboutput.container div.output_area pre,
div.nbinput.container div.input_area .highlight,
div.nboutput.container div.output_area .highlight {
border: none;
padding: 0;
margin: 0;
box-shadow: none;
}
div.nbinput.container > div[class*=highlight],
div.nboutput.container > div[class*=highlight] {
margin: 0;
}
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt * {
background: none;
}
div.nboutput.container div.output_area .highlight,
div.nboutput.container div.output_area pre {
background: unset;
}
div.nboutput.container div.output_area div.highlight {
color: unset; /* override Pygments text color */
}
/* avoid gaps between output lines */
div.nboutput.container div[class*=highlight] pre {
line-height: normal;
}
/* input/output containers */
div.nbinput.container,
div.nboutput.container {
display: -webkit-flex;
display: flex;
align-items: flex-start;
margin: 0;
width: 100%;
}
@media (max-width: 540px) {
div.nbinput.container,
div.nboutput.container {
flex-direction: column;
}
}
/* input container */
div.nbinput.container {
padding-top: 5px;
}
/* last container */
div.nblast.container {
padding-bottom: 5px;
}
/* input prompt */
div.nbinput.container div.prompt pre {
color: #307FC1;
}
/* output prompt */
div.nboutput.container div.prompt pre {
color: #BF5B3D;
}
/* all prompts */
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: 4.5ex;
padding-top: 5px;
position: relative;
user-select: none;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: absolute;
right: 0;
margin-right: 0.3ex;
}
@media (max-width: 540px) {
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: unset;
text-align: left;
padding: 0.4em;
}
div.nboutput.container div.prompt.empty {
padding: 0;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: unset;
}
}
/* disable scrollbars on prompts */
div.nbinput.container div.prompt pre,
div.nboutput.container div.prompt pre {
overflow: hidden;
}
/* input/output area */
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
-webkit-flex: 1;
flex: 1;
overflow: auto;
}
@media (max-width: 540px) {
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
width: 100%;
}
}
/* input area */
div.nbinput.container div.input_area {
border: 1px solid #e0e0e0;
border-radius: 2px;
/*background: #f5f5f5;*/
}
/* override MathJax center alignment in output cells */
div.nboutput.container div[class*=MathJax] {
text-align: left !important;
}
/* override sphinx.ext.imgmath center alignment in output cells */
div.nboutput.container div.math p {
text-align: left;
}
/* standard error */
div.nboutput.container div.output_area.stderr {
background: #fdd;
}
/* ANSI colors */
.ansi-black-fg { color: #3E424D; }
.ansi-black-bg { background-color: #3E424D; }
.ansi-black-intense-fg { color: #282C36; }
.ansi-black-intense-bg { background-color: #282C36; }
.ansi-red-fg { color: #E75C58; }
.ansi-red-bg { background-color: #E75C58; }
.ansi-red-intense-fg { color: #B22B31; }
.ansi-red-intense-bg { background-color: #B22B31; }
.ansi-green-fg { color: #00A250; }
.ansi-green-bg { background-color: #00A250; }
.ansi-green-intense-fg { color: #007427; }
.ansi-green-intense-bg { background-color: #007427; }
.ansi-yellow-fg { color: #DDB62B; }
.ansi-yellow-bg { background-color: #DDB62B; }
.ansi-yellow-intense-fg { color: #B27D12; }
.ansi-yellow-intense-bg { background-color: #B27D12; }
.ansi-blue-fg { color: #208FFB; }
.ansi-blue-bg { background-color: #208FFB; }
.ansi-blue-intense-fg { color: #0065CA; }
.ansi-blue-intense-bg { background-color: #0065CA; }
.ansi-magenta-fg { color: #D160C4; }
.ansi-magenta-bg { background-color: #D160C4; }
.ansi-magenta-intense-fg { color: #A03196; }
.ansi-magenta-intense-bg { background-color: #A03196; }
.ansi-cyan-fg { color: #60C6C8; }
.ansi-cyan-bg { background-color: #60C6C8; }
.ansi-cyan-intense-fg { color: #258F8F; }
.ansi-cyan-intense-bg { background-color: #258F8F; }
.ansi-white-fg { color: #C5C1B4; }
.ansi-white-bg { background-color: #C5C1B4; }
.ansi-white-intense-fg { color: #A1A6B2; }
.ansi-white-intense-bg { background-color: #A1A6B2; }
.ansi-default-inverse-fg { color: #FFFFFF; }
.ansi-default-inverse-bg { background-color: #000000; }
.ansi-bold { font-weight: bold; }
.ansi-underline { text-decoration: underline; }
div.nbinput.container div.input_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight].math,
div.nboutput.container div.output_area.rendered_html,
div.nboutput.container div.output_area > div.output_javascript,
div.nboutput.container div.output_area:not(.rendered_html) > img{
padding: 5px;
margin: 0;
}
/* fix copybtn overflow problem in chromium (needed for 'sphinx_copybutton') */
div.nbinput.container div.input_area > div[class^='highlight'],
div.nboutput.container div.output_area > div[class^='highlight']{
overflow-y: hidden;
}
/* hide copybtn icon on prompts (needed for 'sphinx_copybutton') */
.prompt .copybtn {
display: none;
}
/* Some additional styling taken form the Jupyter notebook CSS */
.jp-RenderedHTMLCommon table,
div.rendered_html table {
border: none;
border-collapse: collapse;
border-spacing: 0;
color: black;
font-size: 12px;
table-layout: fixed;
}
.jp-RenderedHTMLCommon thead,
div.rendered_html thead {
border-bottom: 1px solid black;
vertical-align: bottom;
}
.jp-RenderedHTMLCommon tr,
.jp-RenderedHTMLCommon th,
.jp-RenderedHTMLCommon td,
div.rendered_html tr,
div.rendered_html th,
div.rendered_html td {
text-align: right;
vertical-align: middle;
padding: 0.5em 0.5em;
line-height: normal;
white-space: normal;
max-width: none;
border: none;
}
.jp-RenderedHTMLCommon th,
div.rendered_html th {
font-weight: bold;
}
.jp-RenderedHTMLCommon tbody tr:nth-child(odd),
div.rendered_html tbody tr:nth-child(odd) {
background: #f5f5f5;
}
.jp-RenderedHTMLCommon tbody tr:hover,
div.rendered_html tbody tr:hover {
background: rgba(66, 165, 245, 0.2);
}
</style>
<div class="section" id="Running-OpenPose-on-Inferentia">
<h1>Running OpenPose on Inferentia<a class="headerlink" href="#Running-OpenPose-on-Inferentia" title="Permalink to this headline">#</a></h1>
<div class="section" id="Note:-this-tutorial-runs-on-tensorflow-neuron-1.x-only">
<h2>Note: this tutorial runs on tensorflow-neuron 1.x only<a class="headerlink" href="#Note:-this-tutorial-runs-on-tensorflow-neuron-1.x-only" title="Permalink to this headline">#</a></h2>
</div>
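If you are not sure which kernel you selected, a quick version check can confirm the environment. The snippet below is a minimal sketch (not part of the original notebook; the package-metadata lookup is an assumption about how tensorflow-neuron was installed):

.. code:: python

    # Minimal sketch: confirm the kernel runs TensorFlow 1.x with Neuron support.
    # The pkg_resources lookup assumes tensorflow-neuron was installed via pip.
    import pkg_resources
    import tensorflow as tf

    print('tensorflow:', tf.__version__)  # expect a 1.x release
    print('tensorflow-neuron:', pkg_resources.get_distribution('tensorflow-neuron').version)
    assert tf.__version__.startswith('1.'), 'this tutorial requires tensorflow-neuron 1.x'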
<div class="section" id="Introduction:">
<h2>Introduction:<a class="headerlink" href="#Introduction:" title="Permalink to this headline">#</a></h2>
<p>In this tutorial we will compile and deploy Openpose model for Inferentia. This jupyter notebook should run on an inf1.6xlarge instance for compilation and inference. The inference part of this tutorial requires inf1.6xlarge and not the compilation itself. For simplicity we will run this tutorial on a single instance but in real life scenario the compilation can be done on a compute c5.4xlarge instance and the deployment on the inf1 instance family.</p>
<p>In this tutorial we provide two main sections: 1. Compile the OpenPose model on inf1x6large. 2. Infer the same compiled model on inf1x6large.</p>
<p>Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the <a class="reference external" href="../../../../frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow">Tensorflow Installation Guide</a>. You can select the Kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page.</p>
</div>
<div class="section" id="Acknowledgement:">
<h2>Acknowledgement:<a class="headerlink" href="#Acknowledgement:" title="Permalink to this headline">#</a></h2>
<p>Many thanks to <a class="reference external" href="https://github.com/ildoonet">https://github.com/ildoonet</a> for providing pretrained model as well as the image preprocessing/pose estimating infrastructure.</p>
</div>
<div class="section" id="Download-tensorflow-pose-net-frozen-graph.">
<h2>Download tensorflow pose net frozen graph.<a class="headerlink" href="#Download-tensorflow-pose-net-frozen-graph." title="Permalink to this headline">#</a></h2>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">!</span>wget<span class="w"> </span>-c<span class="w"> </span>--tries<span class="o">=</span><span class="m">2</span><span class="w"> </span><span class="k">$(</span><span class="w"> </span>wget<span class="w"> </span>-q<span class="w"> </span>-O<span class="w"> </span>-<span class="w"> </span>http://www.mediafire.com/file/qlzzr20mpocnpa3/graph_opt.pb<span class="w"> </span><span class="p">|</span><span class="w"> </span>grep<span class="w"> </span>-o<span class="w"> </span><span class="s1">'http*://download[^"]*'</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>tail<span class="w"> </span>-n<span class="w"> </span><span class="m">1</span><span class="w"> </span><span class="k">)</span><span class="w"> </span>-O<span class="w"> </span>graph_opt.pb
<br><br></pre></div>
</div>
</div>
</div>
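Before moving on to compilation, it is worth sanity-checking that the download produced a valid frozen graph. The snippet below is a minimal sketch (not part of the original tutorial) that parses ``graph_opt.pb`` and reports its node count:

.. code:: python

    # Minimal sketch (not in the original tutorial): verify that graph_opt.pb
    # parses as a serialized TF1 GraphDef before attempting compilation.
    import tensorflow as tf

    graph_def = tf.GraphDef()
    with open('graph_opt.pb', 'rb') as f:
        graph_def.ParseFromString(f.read())
    print('parsed frozen graph with', len(graph_def.node), 'nodes')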
<div class="section" id="Compile">
<h2>Compile<a class="headerlink" href="#Compile" title="Permalink to this headline">#</a></h2>
<p>Compile the pose net frozen graph into AWS Neuron compatible form. Network input image resolution is adjustable with argument –net_resolution (e. g., –net_resolution=656x368). The compiled model can accept arbitrary batch size input at runtime.</p>
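The conversion script in the cell below hard-codes the input path and resolution, keeping the original command-line interface as comments. If you would rather drive it as a standalone script, the sketch below reconstructs that argparse interface from those comments (a hypothetical wrapper; argument names are taken from the commented-out lines in the cell that follows):

.. code:: python

    # Hypothetical reconstruction of the commented-out argparse interface in
    # the conversion script below; argument names come from those comments.
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('input_pb_path', help='Input serialized GraphDef protobuf')
    parser.add_argument('output_pb_path', help='Output serialized GraphDef protobuf')
    parser.add_argument('--net_resolution', default='656x368',
                        help='Network resolution in WxH format, e.g. --net_resolution=656x368')
    args = parser.parse_args()

    # '656x368' -> width 656, height 368; used to build the [1, H, W, 3] input shape
    dim_w, dim_h = (int(v) for v in args.net_resolution.split('x'))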
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="sd">"""</span>
<span class="sd">Usage: python convert_graph_opt.py /path/to/graph_opt.pb /path/to/graph_opt_neuron.pb</span>
<span class="sd">"""</span>
<span class="c1">#import argparse</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="kn">from</span> <span class="nn">tensorflow.core.framework.tensor_shape_pb2</span> <span class="kn">import</span> <span class="n">TensorShapeProto</span>
<span class="kn">import</span> <span class="nn">tensorflow.neuron</span> <span class="k">as</span> <span class="nn">tfn</span>
<span class="k">def</span> <span class="nf">compile</span><span class="p">():</span>
<span class="c1">#parser = argparse.ArgumentParser()</span>
<span class="c1">#parser.add_argument('input_pb_path', help='Input serialized GraphDef protobuf')</span>
<span class="c1">#parser.add_argument('output_pb_path', help='Ouput serialized GraphDef protobuf')</span>
<span class="c1">#parser.add_argument('--net_resolution', default='656x368', help='Network resolution in WxH format, e. g., --net_resolution=656x368')</span>
<span class="c1">#parser.add_argument('--debug_verify', action='store_true')</span>
<span class="c1">#args = parser.parse_args()</span>
<span class="n">input_pb_path</span> <span class="o">=</span> <span class="s1">'./graph_opt.pb'</span>
<span class="n">net_resolution</span> <span class="o">=</span> <span class="s1">'656x368'</span>
<span class="n">output_pb_path</span> <span class="o">=</span> <span class="s1">'./graph_opt_neuron_'</span> <span class="o">+</span> <span class="n">net_resolution</span> <span class="o">+</span> <span class="s1">'.pb'</span>
<span class="n">debug_verify</span> <span class="o">=</span> <span class="s1">'store_true'</span>
<span class="n">dim_w</span><span class="p">,</span> <span class="n">dim_h</span> <span class="o">=</span> <span class="n">net_resolution</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s1">'x'</span><span class="p">)</span>
<span class="n">dim_w</span> <span class="o">=</span> <span class="nb">int</span><span class="p">(</span><span class="n">dim_w</span><span class="p">)</span>
<span class="n">dim_h</span> <span class="o">=</span> <span class="nb">int</span><span class="p">(</span><span class="n">dim_h</span><span class="p">)</span>
<span class="n">graph_def</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">GraphDef</span><span class="p">()</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">input_pb_path</span><span class="p">,</span> <span class="s1">'rb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">graph_def</span><span class="o">.</span><span class="n">ParseFromString</span><span class="p">(</span><span class="n">f</span><span class="o">.</span><span class="n">read</span><span class="p">())</span>
<span class="k">if</span> <span class="n">debug_verify</span><span class="p">:</span>
<span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">seed</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span>
<span class="n">feed_dict</span> <span class="o">=</span> <span class="p">{</span><span class="s1">'image:0'</span><span class="p">:</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">dim_h</span><span class="p">,</span> <span class="n">dim_w</span><span class="p">,</span> <span class="mi">3</span><span class="p">)}</span>
<span class="n">output_name</span> <span class="o">=</span> <span class="s1">'Openpose/concat_stage7:0'</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">Session</span><span class="p">(</span><span class="n">graph</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">Graph</span><span class="p">())</span> <span class="k">as</span> <span class="n">sess</span><span class="p">:</span>
<span class="n">tf</span><span class="o">.</span><span class="n">import_graph_def</span><span class="p">(</span><span class="n">graph_def</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">''</span><span class="p">)</span>
<span class="n">result_reference</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">output_name</span><span class="p">,</span> <span class="n">feed_dict</span><span class="p">)</span>
<span class="n">preprocessing_ops</span> <span class="o">=</span> <span class="p">{</span><span class="s1">'preprocess_divide'</span><span class="p">,</span> <span class="s1">'preprocess_divide/y'</span><span class="p">,</span> <span class="s1">'preprocess_subtract'</span><span class="p">,</span> <span class="s1">'preprocess_subtract/y'</span><span class="p">}</span>
<span class="n">graph_def</span> <span class="o">=</span> <span class="n">nhwc_to_nchw</span><span class="p">(</span><span class="n">graph_def</span><span class="p">,</span> <span class="n">preprocessing_ops</span><span class="p">)</span>
<span class="n">graph_def</span> <span class="o">=</span> <span class="n">inline_float32_to_float16</span><span class="p">(</span><span class="n">graph_def</span><span class="p">,</span> <span class="n">preprocessing_ops</span><span class="p">)</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">Session</span><span class="p">(</span><span class="n">graph</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">Graph</span><span class="p">())</span> <span class="k">as</span> <span class="n">sess</span><span class="p">:</span>
<span class="n">tf</span><span class="o">.</span><span class="n">import_graph_def</span><span class="p">(</span><span class="n">graph_def</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">''</span><span class="p">)</span>
<span class="n">no_fuse_ops</span> <span class="o">=</span> <span class="n">preprocessing_ops</span><span class="o">.</span><span class="n">union</span><span class="p">({</span><span class="s1">'Openpose/concat_stage7'</span><span class="p">})</span>
<span class="n">infer_graph</span> <span class="o">=</span> <span class="n">tfn</span><span class="o">.</span><span class="n">graph_util</span><span class="o">.</span><span class="n">inference_graph_from_session</span><span class="p">(</span>
<span class="n">sess</span><span class="p">,</span> <span class="n">shape_feed_dict</span><span class="o">=</span><span class="p">{</span><span class="s1">'image:0'</span><span class="p">:</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="n">dim_h</span><span class="p">,</span> <span class="n">dim_w</span><span class="p">,</span> <span class="mi">3</span><span class="p">]},</span> <span class="n">output_tensors</span><span class="o">=</span><span class="p">[</span><span class="s1">'Openpose/concat_stage7:0'</span><span class="p">],</span>
<span class="n">no_fuse_ops</span><span class="o">=</span><span class="n">no_fuse_ops</span><span class="p">,</span> <span class="n">dynamic_batch_size</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span>
<span class="p">)</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">output_pb_path</span><span class="p">,</span> <span class="s1">'wb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">f</span><span class="o">.</span><span class="n">write</span><span class="p">(</span><span class="n">infer_graph</span><span class="o">.</span><span class="n">as_graph_def</span><span class="p">()</span><span class="o">.</span><span class="n">SerializeToString</span><span class="p">())</span>
<span class="k">if</span> <span class="n">debug_verify</span><span class="p">:</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">Session</span><span class="p">(</span><span class="n">graph</span><span class="o">=</span><span class="n">infer_graph</span><span class="p">)</span> <span class="k">as</span> <span class="n">sess</span><span class="p">:</span>
<span class="n">result_compiled</span> <span class="o">=</span> <span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">output_name</span><span class="p">,</span> <span class="n">feed_dict</span><span class="p">)</span>
<span class="n">np</span><span class="o">.</span><span class="n">testing</span><span class="o">.</span><span class="n">assert_allclose</span><span class="p">(</span><span class="n">result_compiled</span><span class="p">,</span> <span class="n">result_reference</span><span class="p">,</span> <span class="n">rtol</span><span class="o">=</span><span class="mf">1e-2</span><span class="p">,</span> <span class="n">atol</span><span class="o">=</span><span class="mf">1e-3</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">inline_float32_to_float16</span><span class="p">(</span><span class="n">graph_def</span><span class="p">,</span> <span class="n">preprocessing_ops</span><span class="p">):</span>
<span class="n">float32_enum</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="o">.</span><span class="n">as_datatype_enum</span>
<span class="n">float16_enum</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">float16</span><span class="o">.</span><span class="n">as_datatype_enum</span>
<span class="n">graph</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">Graph</span><span class="p">()</span>
<span class="k">with</span> <span class="n">graph</span><span class="o">.</span><span class="n">as_default</span><span class="p">():</span>
<span class="n">tf</span><span class="o">.</span><span class="n">import_graph_def</span><span class="p">(</span><span class="n">graph_def</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">''</span><span class="p">)</span>
<span class="n">graph_def</span> <span class="o">=</span> <span class="n">graph</span><span class="o">.</span><span class="n">as_graph_def</span><span class="p">()</span>
<span class="k">for</span> <span class="n">node</span> <span class="ow">in</span> <span class="n">graph_def</span><span class="o">.</span><span class="n">node</span><span class="p">:</span>
<span class="k">if</span> <span class="n">node</span><span class="o">.</span><span class="n">name</span> <span class="ow">in</span> <span class="n">preprocessing_ops</span> <span class="ow">or</span> <span class="n">node</span><span class="o">.</span><span class="n">op</span> <span class="o">==</span> <span class="s1">'Placeholder'</span><span class="p">:</span>
<span class="n">cast_input_node_name</span> <span class="o">=</span> <span class="n">node</span><span class="o">.</span><span class="n">name</span>
<span class="k">continue</span>
<span class="k">if</span> <span class="n">node</span><span class="o">.</span><span class="n">op</span> <span class="o">==</span> <span class="s1">'Const'</span><span class="p">:</span>
<span class="k">if</span> <span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s1">'dtype'</span><span class="p">]</span><span class="o">.</span><span class="n">type</span> <span class="o">==</span> <span class="n">float32_enum</span><span class="p">:</span>
<span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s1">'dtype'</span><span class="p">]</span><span class="o">.</span><span class="n">type</span> <span class="o">=</span> <span class="n">float16_enum</span>
<span class="n">tensor_def</span> <span class="o">=</span> <span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s1">'value'</span><span class="p">]</span><span class="o">.</span><span class="n">tensor</span>
<span class="n">tensor_def</span><span class="o">.</span><span class="n">dtype</span> <span class="o">=</span> <span class="n">float16_enum</span>
<span class="k">if</span> <span class="n">tensor_def</span><span class="o">.</span><span class="n">tensor_content</span><span class="p">:</span>
<span class="n">const_np</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">frombuffer</span><span class="p">(</span><span class="n">tensor_def</span><span class="o">.</span><span class="n">tensor_content</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="n">np</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span><span class="o">.</span><span class="n">astype</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">float16</span><span class="p">)</span>
<span class="n">tensor_def</span><span class="o">.</span><span class="n">tensor_content</span> <span class="o">=</span> <span class="n">const_np</span><span class="o">.</span><span class="n">tobytes</span><span class="p">()</span>
<span class="k">elif</span> <span class="nb">len</span><span class="p">(</span><span class="n">tensor_def</span><span class="o">.</span><span class="n">float_val</span><span class="p">):</span>
<span class="n">const_np</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">tensor_def</span><span class="o">.</span><span class="n">float_val</span><span class="p">)</span><span class="o">.</span><span class="n">astype</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">float16</span><span class="p">)</span><span class="o">.</span><span class="n">view</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">uint16</span><span class="p">)</span>
<span class="n">tensor_def</span><span class="o">.</span><span class="n">float_val</span><span class="p">[:]</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">tensor_def</span><span class="o">.</span><span class="n">half_val</span><span class="p">[:]</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="n">const_np</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="k">raise</span> <span class="ne">NotImplementedError</span>
<span class="k">elif</span> <span class="s1">'T'</span> <span class="ow">in</span> <span class="n">node</span><span class="o">.</span><span class="n">attr</span> <span class="ow">and</span> <span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s1">'T'</span><span class="p">]</span><span class="o">.</span><span class="n">type</span> <span class="o">==</span> <span class="n">float32_enum</span><span class="p">:</span>
<span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s1">'T'</span><span class="p">]</span><span class="o">.</span><span class="n">type</span> <span class="o">=</span> <span class="n">float16_enum</span>
<span class="k">for</span> <span class="n">node</span> <span class="ow">in</span> <span class="n">graph_def</span><span class="o">.</span><span class="n">node</span><span class="p">:</span>
<span class="k">if</span> <span class="n">node</span><span class="o">.</span><span class="n">name</span> <span class="o">==</span> <span class="n">cast_input_node_name</span><span class="p">:</span>
<span class="n">node</span><span class="o">.</span><span class="n">name</span> <span class="o">=</span> <span class="s1">'</span><span class="si">{}</span><span class="s1">_PreCastFloat32ToFlot16'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">node</span><span class="o">.</span><span class="n">name</span><span class="p">)</span>
<span class="n">input_node</span> <span class="o">=</span> <span class="n">node</span>
<span class="k">break</span>
<span class="n">cast_input_node</span> <span class="o">=</span> <span class="n">_gen_cast_node_def</span><span class="p">(</span><span class="n">cast_input_node_name</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">float16</span><span class="p">,</span> <span class="n">input_node</span><span class="p">)</span>
<span class="n">output_node</span> <span class="o">=</span> <span class="n">graph_def</span><span class="o">.</span><span class="n">node</span><span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span>
<span class="n">cast_output_node_name</span> <span class="o">=</span> <span class="n">output_node</span><span class="o">.</span><span class="n">name</span>
<span class="n">output_node</span><span class="o">.</span><span class="n">name</span> <span class="o">=</span> <span class="s1">'</span><span class="si">{}</span><span class="s1">_PreCastFloat16ToFlot32'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">output_node</span><span class="o">.</span><span class="n">name</span><span class="p">)</span>
<span class="n">cast_output_node</span> <span class="o">=</span> <span class="n">_gen_cast_node_def</span><span class="p">(</span><span class="n">cast_output_node_name</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">,</span> <span class="n">output_node</span><span class="p">)</span>
<span class="n">preprocessing_ops</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">input_node</span><span class="o">.</span><span class="n">name</span><span class="p">)</span>
<span class="n">new_graph_def</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">GraphDef</span><span class="p">()</span>
<span class="n">new_graph_def</span><span class="o">.</span><span class="n">node</span><span class="o">.</span><span class="n">extend</span><span class="p">(</span><span class="n">graph_def</span><span class="o">.</span><span class="n">node</span><span class="p">)</span>
<span class="n">new_graph_def</span><span class="o">.</span><span class="n">node</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">cast_input_node</span><span class="p">)</span>
<span class="n">new_graph_def</span><span class="o">.</span><span class="n">node</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">cast_output_node</span><span class="p">)</span>
<span class="n">graph</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">Graph</span><span class="p">()</span>
<span class="k">with</span> <span class="n">graph</span><span class="o">.</span><span class="n">as_default</span><span class="p">():</span>
<span class="n">tf</span><span class="o">.</span><span class="n">import_graph_def</span><span class="p">(</span><span class="n">new_graph_def</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">''</span><span class="p">)</span>
<span class="k">return</span> <span class="n">graph</span><span class="o">.</span><span class="n">as_graph_def</span><span class="p">()</span>
<span class="k">def</span> <span class="nf">nhwc_to_nchw</span><span class="p">(</span><span class="n">graph_def</span><span class="p">,</span> <span class="n">preprocessing_ops</span><span class="p">):</span>
<span class="n">graph</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">Graph</span><span class="p">()</span>
<span class="k">with</span> <span class="n">graph</span><span class="o">.</span><span class="n">as_default</span><span class="p">():</span>
<span class="n">tf</span><span class="o">.</span><span class="n">import_graph_def</span><span class="p">(</span><span class="n">graph_def</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">''</span><span class="p">)</span>
<span class="n">graph_def</span> <span class="o">=</span> <span class="n">graph</span><span class="o">.</span><span class="n">as_graph_def</span><span class="p">()</span>
<span class="n">node_name_to_node</span> <span class="o">=</span> <span class="p">{</span><span class="n">node</span><span class="o">.</span><span class="n">name</span><span class="p">:</span> <span class="n">node</span> <span class="k">for</span> <span class="n">node</span> <span class="ow">in</span> <span class="n">graph_def</span><span class="o">.</span><span class="n">node</span><span class="p">}</span>
<span class="k">for</span> <span class="n">node</span> <span class="ow">in</span> <span class="n">graph_def</span><span class="o">.</span><span class="n">node</span><span class="p">:</span>
<span class="k">if</span> <span class="n">node</span><span class="o">.</span><span class="n">name</span> <span class="ow">in</span> <span class="n">preprocessing_ops</span> <span class="ow">or</span> <span class="n">node</span><span class="o">.</span><span class="n">op</span> <span class="o">==</span> <span class="s1">'Placeholder'</span><span class="p">:</span>
<span class="n">transpose_input_node_name</span> <span class="o">=</span> <span class="n">node</span><span class="o">.</span><span class="n">name</span>
<span class="k">continue</span>
<span class="k">if</span> <span class="n">node</span><span class="o">.</span><span class="n">op</span> <span class="o">==</span> <span class="s1">'Conv2D'</span><span class="p">:</span>
<span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s1">'data_format'</span><span class="p">]</span><span class="o">.</span><span class="n">s</span> <span class="o">=</span> <span class="sa">b</span><span class="s1">'NCHW'</span>
<span class="n">strides</span> <span class="o">=</span> <span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s1">'strides'</span><span class="p">]</span><span class="o">.</span><span class="n">list</span><span class="o">.</span><span class="n">i</span>
<span class="n">strides</span><span class="p">[:]</span> <span class="o">=</span> <span class="p">[</span><span class="n">strides</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">strides</span><span class="p">[</span><span class="mi">3</span><span class="p">],</span> <span class="n">strides</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">strides</span><span class="p">[</span><span class="mi">2</span><span class="p">]]</span>
<span class="k">elif</span> <span class="n">node</span><span class="o">.</span><span class="n">op</span> <span class="o">==</span> <span class="s1">'BiasAdd'</span><span class="p">:</span>
<span class="k">if</span> <span class="n">node</span><span class="o">.</span><span class="n">name</span> <span class="o">!=</span> <span class="s1">'probs/BiasAdd'</span><span class="p">:</span>
<span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s1">'data_format'</span><span class="p">]</span><span class="o">.</span><span class="n">s</span> <span class="o">=</span> <span class="sa">b</span><span class="s1">'NCHW'</span>
<span class="k">elif</span> <span class="n">node</span><span class="o">.</span><span class="n">op</span> <span class="o">==</span> <span class="s1">'MaxPool'</span><span class="p">:</span>
<span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s1">'data_format'</span><span class="p">]</span><span class="o">.</span><span class="n">s</span> <span class="o">=</span> <span class="sa">b</span><span class="s1">'NCHW'</span>
<span class="n">ksize</span> <span class="o">=</span> <span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s1">'ksize'</span><span class="p">]</span><span class="o">.</span><span class="n">list</span><span class="o">.</span><span class="n">i</span>
<span class="n">ksize</span><span class="p">[:]</span> <span class="o">=</span> <span class="p">[</span><span class="n">ksize</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">ksize</span><span class="p">[</span><span class="mi">3</span><span class="p">],</span> <span class="n">ksize</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">ksize</span><span class="p">[</span><span class="mi">2</span><span class="p">]]</span>
<span class="n">strides</span> <span class="o">=</span> <span class="n">node</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s1">'strides'</span><span class="p">]</span><span class="o">.</span><span class="n">list</span><span class="o">.</span><span class="n">i</span>
<span class="n">strides</span><span class="p">[:]</span> <span class="o">=</span> <span class="p">[</span><span class="n">strides</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">strides</span><span class="p">[</span><span class="mi">3</span><span class="p">],</span> <span class="n">strides</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">strides</span><span class="p">[</span><span class="mi">2</span><span class="p">]]</span>
<span class="k">elif</span> <span class="n">node</span><span class="o">.</span><span class="n">op</span> <span class="ow">in</span> <span class="p">{</span><span class="s1">'Concat'</span><span class="p">,</span> <span class="s1">'ConcatV2'</span><span class="p">}:</span>
<span class="n">node_axes</span> <span class="o">=</span> <span class="n">node_name_to_node</span><span class="p">[</span><span class="n">node</span><span class="o">.</span><span class="n">input</span><span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">]]</span>
<span class="n">node_axes</span><span class="o">.</span><span class="n">attr</span><span class="p">[</span><span class="s1">'value'</span><span class="p">]</span><span class="o">.</span><span class="n">tensor</span><span class="o">.</span><span class="n">int_val</span><span class="p">[:]</span> <span class="o">=</span> <span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="k">for</span> <span class="n">node</span> <span class="ow">in</span> <span class="n">graph_def</span><span class="o">.</span><span class="n">node</span><span class="p">:</span>
<span class="k">if</span> <span class="n">node</span><span class="o">.</span><span class="n">name</span> <span class="o">==</span> <span class="n">transpose_input_node_name</span><span class="p">:</span>
```
        node.name = '{}_PreTransposeNHWC2NCHW'.format(node.name)
        input_node = node
        break
    transpose_input_node, transpose_input_perm_node = _gen_transpose_def(transpose_input_node_name, [0, 3, 1, 2], input_node)
    output_node = graph_def.node[-1]
    transpose_output_node_name = output_node.name
    output_node.name = '{}_PreTransposeNCHW2NHWC'.format(output_node.name)
    transpose_output_node, transpose_output_perm_node = _gen_transpose_def(transpose_output_node_name, [0, 2, 3, 1], output_node)
    preprocessing_ops.add(input_node.name)
    preprocessing_ops.add(transpose_input_perm_node.name)
    new_graph_def = tf.GraphDef()
    new_graph_def.node.extend(graph_def.node)
    new_graph_def.node.append(transpose_input_perm_node)
    new_graph_def.node.append(transpose_input_node)
    new_graph_def.node.append(transpose_output_perm_node)
    new_graph_def.node.append(transpose_output_node)
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(new_graph_def, name='')
    return graph.as_graph_def()


def _gen_cast_node_def(name, target_dtype, input_node):
    cast_node = tf.NodeDef(name=name, op='Cast')
    cast_node.input.append(input_node.name)
    cast_node.attr['DstT'].type = target_dtype.as_datatype_enum
    cast_node.attr['SrcT'].type = input_node.attr['T'].type
    cast_node.attr['Truncate'].b = False
    return cast_node


def _gen_transpose_def(name, perm, input_node):
    perm_node = tf.NodeDef(name='{}/perm'.format(name), op='Const')
    perm_node.attr['dtype'].type = tf.int32.as_datatype_enum
    tensor_def = perm_node.attr['value'].tensor
    tensor_def.dtype = tf.int32.as_datatype_enum
    tensor_def.tensor_shape.dim.append(TensorShapeProto.Dim(size=4))
    tensor_def.tensor_content = np.array(perm, dtype=np.int32).tobytes()
    transpose_node = tf.NodeDef(name=name, op='Transpose')
    transpose_node.input.append(input_node.name)
    transpose_node.input.append(perm_node.name)
    transpose_node.attr['T'].type = input_node.attr['T'].type
    transpose_node.attr['Tperm'].type = tf.int32.as_datatype_enum
    return transpose_node, perm_node
```
```
compile()
# Sample output will look like below:
# WARNING:tensorflow:From <ipython-input-3-27d3844cd753>:47: inference_graph_from_session (from tensorflow_neuron.python.graph_util) is deprecated and will be removed in a future version.
# Instructions for updating:
# Please refer to AWS documentation on Neuron integrated TensorFlow 2.0.
# INFO:tensorflow:Froze 0 variables.
# INFO:tensorflow:Converted 0 variables to const ops.
# INFO:tensorflow:fusing subgraph {subgraph neuron_op_ed41d2deb8c54255 with input tensors ["<tf.Tensor 'preprocess_subtract0/_0:0' shape=(1, 3, 368, 656) dtype=float16>"], output tensors ["<tf.Tensor 'Openpose/concat_stage7_PreCastFloat16ToFlot32:0' shape=(1, 46, 82, 57) dtype=float16>"]} with neuron-cc
# INFO:tensorflow:Number of operations in TensorFlow session: 474
# INFO:tensorflow:Number of operations after tf.neuron optimizations: 474
# INFO:tensorflow:Number of operations placed on Neuron runtime: 465
```
## Deploy

We use the same instance to deploy the model. If you deploy on a different instance, launch an inf1 deployment instance and copy the AWS Neuron-optimized TensorFlow frozen graph `graph_opt_neuron_656x368.pb` to it. The smallest instance type, inf1.xlarge, is sufficient for this demo.

Your `graph_opt_neuron_656x368.pb` can now be plugged into https://github.com/ildoonet seamlessly if you have tensorflow-neuron installed. When it is used at runtime, please ensure that the image resolution is the same as the compile-time image resolution, i.e., 656x368.

Measure performance on the compiled frozen graph using dummy inputs.
```
"""
Copyright (C) 2020, Amazon.com. All Rights Reserved
"""
import os
import atexit
import time
import math
import json
from collections import OrderedDict, Counter
from contextlib import contextmanager, ContextDecorator
from functools import wraps
from tensorflow.python.client import session
from tensorflow.python.platform import tf_logging as logging


class measure_performance(ContextDecorator):
    """Convenient tool for performance measurements.

    Can be applied to tensorflow session.run, tf-serving unary gRPC calls, or a given custom function.

    Usage:

    To generate a performance report for the entire Python or gRPC-client process, insert
    the following function call before running inferences:

    `tfn.measure_performance()`

    Then a latency/throughput report will be generated when the process terminates.

    Alternatively, it is possible to use `tfn.measure_performance` programmatically
    as a context manager. Performance measurement will be done for all inferences
    happening under this context. The report will be displayed as an INFO-level log when
    exiting the context. It is also possible to obtain a JSON format report in Python.

    For example:

    ```
    with tfn.measure_performance() as perf:
        ... (run some inferences) ...
    report_json = perf.report()
    report_full_json = perf.report(verbosity=1)
    ```
    """

    def __init__(self, func=None, window_size=1):
        self.perf_tracker = PerformanceTracker(window_size)
        atexit.register(self.perf_tracker.report)
        self._original_run = session.Session.run
        self._original_grpc_call = None
        if callable(func):
            self.perf_tracker.register_func(self._track_performance(func))
        else:
            session.Session.run = self._track_performance(session.Session.run)
            try:
                import grpc
                from tensorflow_serving.apis import prediction_service_pb2_grpc
                dummy_stub = prediction_service_pb2_grpc.PredictionServiceStub(grpc.insecure_channel(''))
                self._grpc_callable_type = type(dummy_stub.Predict)
                self._original_grpc_call = self._grpc_callable_type.__call__
            except ImportError:
                pass
            if callable(self._original_grpc_call):
                self._grpc_callable_type.__call__ = self._track_performance(
                    grpc._channel._UnaryUnaryMultiCallable.__call__
                )

    def __enter__(self):
        return self.perf_tracker

    def __exit__(self, *exc):
        atexit.unregister(self.perf_tracker.report)
        self.perf_tracker.report()
        session.Session.run = self._original_run
        if self._original_grpc_call is not None:
            self._grpc_callable_type.__call__ = self._original_grpc_call
        return False

    def _track_performance(self, func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = func(*args, **kwargs)
            end = time.time()
            self.perf_tracker.add_timestamps(start, end)
            return result
        return wrapper


class PerformanceTracker(ContextDecorator):

    description = (
        "Latency unit: second. Throughput unit: number of batched inferences per second. "
        "Reported throughput is a lower bound of the actual throughput as inferences "
        "spanning across window boundaries are not counted towards any of the windows. "
        "'Quiet' periods (i.e., window buckets where the inference function is not called) "
        "are not counted towards the reported average throughput."
    )

    def __init__(self, window_size):
        self.window_size = window_size
        self.timestamps_list = []
        self._func = None

    def __call__(self, *args, **kwargs):
        return self._func(*args, **kwargs)

    def register_func(self, func):
        self._func = func

    def add_timestamps(self, start, end):
        self.timestamps_list.append([start, end])

    def report(self, verbosity=0):
        if self.timestamps_list:
            latency_list = [end - start for start, end in self.timestamps_list]
            latency_json = {
                'p50': percentile(latency_list, 50),
                'p90': percentile(latency_list, 90),
                'p99': percentile(latency_list, 99),
                'p100': percentile(latency_list, 100),
            }
            bucketed_timestamps = [self._get_bucket(start, end) for start, end in self.timestamps_list]
            counted_buckets = Counter(item for item in bucketed_timestamps if item is not None)
            bucket_throughputs = [(key, value / self.window_size) for key, value in sorted(counted_buckets.items())]
            busy_throughputs = list(OrderedDict((key, value) for key, value in bucket_throughputs).values())
            throughput_json = {
                'peak': max(busy_throughputs),
                'median': percentile(busy_throughputs, 50),
                'average': sum(busy_throughputs) / len(busy_throughputs),
            }
            if verbosity > 0:
                throughput_json['trend'] = busy_throughputs
            report_json = {
                'pid': os.getpid(),
                'throughput': throughput_json,
                'latency': latency_json,
                'description': PerformanceTracker.description,
            }
            with _logging_show_info():
                logging.info('performance report:\n{}'.format(json.dumps(report_json, indent=4)))
            return report_json

    def _get_bucket(self, start, end):
        bucketed_start = math.floor(start / self.window_size) * self.window_size
        bucketed_end = math.ceil(end / self.window_size) * self.window_size
        if bucketed_end - bucketed_start == self.window_size:
            return bucketed_start
        else:
            return None


def percentile(number_list, percent):
    pos_float = len(number_list) * percent / 100
    max_pos = len(number_list) - 1
    pos_floor = min(math.floor(pos_float), max_pos)
    pos_ceil = min(math.ceil(pos_float), max_pos)
    number_list = sorted(number_list)
    return number_list[pos_ceil] if pos_float - pos_floor > 0.5 else number_list[pos_floor]


@contextmanager
def _logging_show_info():
    try:
        verbosity = logging.get_verbosity()
        logging.set_verbosity(logging.INFO)
        yield
    finally:
        logging.set_verbosity(verbosity)
```
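To make the window-bucketing rule concrete, here is a small sanity check (not part of the original tutorial) that exercises `PerformanceTracker._get_bucket` directly; it assumes the class definitions above have been executed. With `window_size=1`, an inference is only counted when it starts and ends inside the same one-second window:

```
tracker = PerformanceTracker(window_size=1)
# Fully inside the [2, 3) window, so it is counted towards that window's throughput.
print(tracker._get_bucket(2.1, 2.9))  # 2
# Spans the boundary between the [0, 1) and [1, 2) windows, so it is not counted.
print(tracker._get_bucket(0.9, 1.1))  # None
```

This is why the reported throughput is a lower bound: boundary-spanning inferences are dropped from every window.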
```
"""
Inputs for benchmarking the compiled frozen graph:

pb_path: path to graph_opt_neuron_656x368.pb
num_thread = 8 (number of threads that work on each tensorflow session)
batch_size = 1
net_resolution: default is 656x368
num_inferences = 200
"""
import os
from concurrent import futures
import numpy as np
import tensorflow as tf
import tensorflow.neuron as tfn


def run_with_dummy(sess, dummy_feed_dict, num_inferences):
    for _ in range(num_inferences):
        sess.run('Openpose/concat_stage7:0', dummy_feed_dict)


def main():
    NUM_NEURON_CORES = 16
    pb_path = './graph_opt_neuron_656x368.pb'
    num_thread = 8
    batch_size = 1
    net_resolution = '656x368'
    num_inferences = 200
    dim_w, dim_h = net_resolution.split('x')
    dim_w = int(dim_w)
    dim_h = int(dim_h)
    graph_def = tf.GraphDef()
    with open(pb_path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    graph_def = tfn.graph_util.tag_multicore(graph_def, NUM_NEURON_CORES)
    with tfn.measure_performance() as perf:
        with tf.Session(graph=tf.Graph()) as sess:
            tf.import_graph_def(graph_def, name='')
            input_name = 'image:0'
            input_shape = sess.graph.get_tensor_by_name(input_name).shape.as_list()
            input_shape[0] = batch_size
            input_shape[1] = dim_h
            input_shape[2] = dim_w
            dummy_feed_dict = {input_name: np.zeros(input_shape).astype(np.float32)}
            with futures.ThreadPoolExecutor(max_workers=num_thread) as executor:
                fut_list = [executor.submit(run_with_dummy, sess, dummy_feed_dict, num_inferences) for _ in range(num_thread)]
                res_list = [fut.result() for fut in fut_list]

main()
# Sample output will look like below:
# INFO:tensorflow:performance report:
# {
#     "pid": 17713,
#     "throughput": {
#         "peak": 66.0,
#         "median": 64.0,
#         "average": 61.56521739130435
#     },
#     "latency": {
#         "p50": 0.1106414794921875,
#         "p90": 0.11212301254272461,
#         "p99": 0.11337876319885254,
#         "p100": 7.08282732963562
#     },
#     "description": "Latency unit: second. Throughput unit: number of batched inferences per second. Reported throughput is a lower bound of the actual throughput as inferences spanning across window boundaries are not counted towards any of the windows. 'Quiet' periods (i.e., window buckets where the inference function is not called) are not counted towards the reported average throughput."
# }
```
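As a quick end-to-end check before wiring the graph into the tf-pose-estimation pipeline, a minimal sketch like the following can run the compiled graph on a real image. This is not part of the original tutorial: the OpenCV usage and the file name `person.jpg` are illustrative assumptions, while the tensor names `image:0` and `Openpose/concat_stage7:0` and the NHWC float32 input of shape (1, 368, 656, 3) match the benchmark code above.

```
import cv2  # assumption: OpenCV is installed; any loader producing an HxWx3 uint8 array works
import numpy as np
import tensorflow as tf

graph_def = tf.GraphDef()
with open('./graph_opt_neuron_656x368.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

image = cv2.imread('person.jpg')                    # hypothetical input image
image = cv2.resize(image, (656, 368))               # must match the compile-time resolution (width x height)
batch = image[np.newaxis, ...].astype(np.float32)   # shape (1, 368, 656, 3), NHWC

with tf.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name='')
    heatmaps = sess.run('Openpose/concat_stage7:0', {'image:0': batch})
print(heatmaps.shape)  # expected (1, 46, 82, 57) per the compilation log above
```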
|
2023-09-29T20:54:51.332Z
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.rst.txt
|
```
Utilizing Neuron Capabilities Tutorials (``tensorflow-neuron``)
===============================================================
* Tensorflow 1.x - Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving :ref:`[html] <tensorflow-serving-neuronrt-visible-cores>`
.. toctree::
:hidden:
/frameworks/tensorflow/tensorflow-neuron/tutorials/tutorial-tensorflow-serving-NeuronRT-Visible-Cores
.. note::
To use Jupyter Notebook see:
* :ref:`setup-jupyter-notebook-steps-troubleshooting`
* :ref:`running-jupyter-notebook-as-script`
```
|
|
2023-09-29T20:54:51.370Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.rst.txt
|
```
Natural Language Processing (NLP) Tutorials (``tensorflow-neuron``)
===================================================================
* Tensorflow 1.x - Running TensorFlow BERT-Large with AWS Neuron :ref:`[html] <tensorflow-bert-demo>`
* Tensorflow 2.x - HuggingFace DistilBERT with Tensorflow2 Neuron :ref:`[html] </src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb>` :github:`[notebook] </src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb>`
.. toctree::
:hidden:
/frameworks/tensorflow/tensorflow-neuron/tutorials/bert_demo/bert_demo
/src/examples/tensorflow/huggingface_bert/huggingface_bert
```
|
|
2023-09-29T20:54:51.383Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/additional-examples.rst.txt
|
```
Additional Examples (``tensorflow-neuron``)
===========================================
.. toctree::
:maxdepth: 1
:hidden:
AWS Neuron Samples GitHub Repository <https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference>
.. include:: /frameworks/tensorflow/tensorflow-neuron/additional-examples.txt
```
|
|
2023-09-29T20:54:51.396Z
|
|
Working with YOLO v4 using AWS Neuron SDK — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/tensorflow/tensorflow-neuron/tutorials/yolo_v4_demo/yolo_v4_demo.html#tensorflow-yolo4
|
# Working with YOLO v4 using AWS Neuron SDK — AWS Neuron Documentation
The [Evaluate YOLO v4 on Inferentia](../../../../../src/examples/tensorflow/yolo_v4_demo/evaluate.html) notebook contains an example of how to take an open-source YOLO v4 model and run it on AWS Inferentia.
## Optimizing image pre-processing and post-processing for object detection models
End-to-end object detection pipelines usually contain image pre- and post-processing operators that cannot run efficiently on Inferentia; DecodeJPEG and NonMaxSuppression are typical examples. In practice, we may simply place these operators on CPU using the AWS Neuron machine learning framework integration. However, Inferentia is such a high-performance machine learning accelerator that, once the model successfully compiles and runs, these simple pre- and post-processing operators can become the new performance bottleneck! In this tutorial, we explain some commonly used TensorFlow techniques for optimizing the performance of these operators so that we can fully unleash the potential of Inferentia.
1. Write JPEG decoding and image shifting/scaling as tensorflow operators.
In `yolo_v4_coco_saved_model.py`, you may find the following code snippet.
```
import tensorflow as tf
...

def YOLOv4(...
    ...
    x, image_shape = layers.Lambda(lambda t: preprocessor(t, input_shape))(inputs)
    # cspdarknet53
    x = conv2d_unit(x, i32, 3, strides=1, padding='same')
    ...

def decode_jpeg_resize(input_tensor, image_size):
    tensor = tf.image.decode_png(input_tensor, channels=3)
    shape = tf.shape(tensor)
    tensor = tf.cast(tensor, tf.float32)
    tensor = tf.image.resize(tensor, image_size)
    tensor /= 255.0
    return tf.cast(tensor, tf.float16), shape

def preprocessor(input_tensor, image_size):
    with tf.name_scope('Preprocessor'):
        tensor = tf.map_fn(
            partial(decode_jpeg_resize, image_size=image_size), input_tensor,
            dtype=(tf.float16, tf.int32), back_prop=False, parallel_iterations=16)
    return tensor
```
Compared with the implementation in [the original repo](https://github.com/miemie2013/Keras-YOLOv4/blob/f0a6b379a362dc3f2d1ef5bd0e58933ed6490ff3/model/yolov4.py), the difference is our use of `tf.image.decode_png` and `tf.image.resize`, along with a small number of scaling/casting operators. After this modification, the generated TensorFlow SavedModel takes JPEG image raw bytes as input instead of a float32 array representing the image. When the image resolution is 608x608, this technique effectively reduces the input image size from 4.4 MB to the size of a typical JPEG image, which can be as small as a few hundred KB. When the SavedModel is deployed through [tensorflow/serving](https://github.com/tensorflow/serving), this very effectively reduces the gRPC transfer overhead for input images.
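To illustrate the saving, here is a hedged client-side sketch (not from the tutorial) of sending raw JPEG bytes to a tensorflow/serving endpoint over gRPC. The model name `yolo_v4`, the input key `input`, the file name `test.jpg`, and the server address are illustrative assumptions; the point is that the request carries a single string tensor of JPEG bytes rather than a multi-megabyte float32 array:

```
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

with open('test.jpg', 'rb') as f:  # hypothetical input image
    jpeg_bytes = f.read()

channel = grpc.insecure_channel('localhost:8500')  # assumed serving address
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'yolo_v4'  # assumed model name
# One string tensor holding the raw JPEG bytes; decoding and resizing happen inside the SavedModel.
request.inputs['input'].CopyFrom(tf.make_tensor_proto([jpeg_bytes], dtype=tf.string))
response = stub.Predict(request, timeout=10.0)
```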
2. Replace non-max suppression (NMS) operations with `tf.image.combined_non_max_suppression`.
Another difference in our implementation is the treatment of non-max suppression, a commonly used operation for removing redundant bounding boxes that overlap with other boxes. In an object detection scenario represented by the COCO dataset, where the number of output classes is large, the hand-fused [`tf.image.combined_non_max_suppression`](https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/image/combined_non_max_suppression) operator can parallelize multi-class NMS on CPU in a very efficient manner. With proper use of this operator, the bounding box post-processing step is less likely to become the performance bottleneck in the end-to-end object detection pipeline.
The following sample code (from `yolo_v4_coco_saved_model.py`) demonstrates our method of writing the bounding box post-processing step using efficient tensorflow operations.
```
...
def filter_boxes(outputs):
    boxes_l, boxes_m, boxes_s, box_scores_l, box_scores_m, box_scores_s, image_shape = outputs
    boxes_l, box_scores_l = filter_boxes_one_size(boxes_l, box_scores_l)
    boxes_m, box_scores_m = filter_boxes_one_size(boxes_m, box_scores_m)
    boxes_s, box_scores_s = filter_boxes_one_size(boxes_s, box_scores_s)
    boxes = tf.concat([boxes_l, boxes_m, boxes_s], axis=0)
    box_scores = tf.concat([box_scores_l, box_scores_m, box_scores_s], axis=0)
    image_shape_wh = image_shape[1::-1]
    image_shape_whwh = tf.concat([image_shape_wh, image_shape_wh], axis=-1)
    image_shape_whwh = tf.cast(image_shape_whwh, tf.float32)
    boxes *= image_shape_whwh
    boxes = tf.expand_dims(boxes, 0)
    box_scores = tf.expand_dims(box_scores, 0)
    boxes = tf.expand_dims(boxes, 2)
    nms_boxes, nms_scores, nms_classes, valid_detections = tf.image.combined_non_max_suppression(
        boxes,
        box_scores,
        max_output_size_per_class=nms_top_k,
        max_total_size=nms_top_k,
        iou_threshold=nms_thresh,
        score_threshold=conf_thresh,
        pad_per_class=False,
        clip_boxes=False,
        name='CombinedNonMaxSuppression',
    )
    return nms_boxes[0], nms_scores[0], nms_classes[0]

def filter_boxes_one_size(boxes, box_scores):
    box_class_scores = tf.reduce_max(box_scores, axis=-1)
    keep = box_class_scores > conf_thresh
    boxes = boxes[keep]
    box_scores = box_scores[keep]
    return boxes, box_scores

def batch_yolo_out(outputs):
    with tf.name_scope('yolo_out'):
        b_output_lr, b_output_mr, b_output_sr, b_image_shape = outputs
        with tf.name_scope('process_feats'):
            b_boxes_l, b_box_scores_l = batch_process_feats(b_output_lr, anchors, masks[0])
        with tf.name_scope('process_feats'):
            b_boxes_m, b_box_scores_m = batch_process_feats(b_output_mr, anchors, masks[1])
        with tf.name_scope('process_feats'):
            b_boxes_s, b_box_scores_s = batch_process_feats(b_output_sr, anchors, masks[2])
        with tf.name_scope('filter_boxes'):
            b_nms_boxes, b_nms_scores, b_nms_classes = tf.map_fn(
                filter_boxes, [b_boxes_l, b_boxes_m, b_boxes_s, b_box_scores_l, b_box_scores_m, b_box_scores_s, b_image_shape],
                dtype=(tf.float32, tf.float32, tf.float32), back_prop=False, parallel_iterations=16)
    return b_nms_boxes, b_nms_scores, b_nms_classes

boxes_scores_classes = layers.Lambda(batch_yolo_out)([output_lr, output_mr, output_sr, image_shape])
...
```
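As a standalone sanity check of the operator's contract (not from the tutorial; the shapes and thresholds are illustrative), `tf.image.combined_non_max_suppression` takes class-agnostic boxes of shape `[batch, num_boxes, 1, 4]` and scores of shape `[batch, num_boxes, num_classes]`, and returns padded per-image detections:

```
import numpy as np
import tensorflow as tf

boxes = tf.constant(np.random.rand(1, 100, 1, 4), tf.float32)   # q=1: boxes shared across classes
scores = tf.constant(np.random.rand(1, 100, 80), tf.float32)    # e.g. 80 COCO classes
nms = tf.image.combined_non_max_suppression(
    boxes, scores, max_output_size_per_class=10, max_total_size=10,
    iou_threshold=0.45, score_threshold=0.1)
with tf.Session() as sess:
    nms_boxes, nms_scores, nms_classes, valid = sess.run(nms)
print(nms_boxes.shape, nms_scores.shape, nms_classes.shape)  # (1, 10, 4) (1, 10) (1, 10)
```

Because a single fused kernel handles all classes, this avoids looping over per-class NMS calls in Python, which is where the CPU-side parallelism win comes from.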
For other advanced data input/output pipeline optimization techniques, please refer to [https://www.tensorflow.org/guide/data#preprocessing_data](https://www.tensorflow.org/guide/data#preprocessing_data).
_This document is relevant for_: `Inf1`
|
<script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60" class="scrolled">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="current nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 current active has-children">
<a class="reference internal" href="../../../index.html">
TensorFlow Neuron
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l2">
<a class="reference internal" href="../../../tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 current active has-children">
<a class="reference internal" href="../../../tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l3 current active has-children">
<a class="reference internal" href="../tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l4 current active">
<a class="reference internal" href="../tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
_This document is relevant for_: `Inf1`

# Working with YOLO v4 using AWS Neuron SDK

The [Evaluate YOLO v4 on Inferentia](../../../../../src/examples/tensorflow/yolo_v4_demo/evaluate.html) notebook contains an example of how to take an open-source YOLO v4 model and run it on AWS Inferentia.

## Optimizing image pre-processing and post-processing for object detection models
End-to-end object detection pipelines usually contain image pre- and post-processing operators that cannot run efficiently on Inferentia; DecodeJPEG and NonMaxSuppression are typical examples. In practice, we may simply place these operators on CPU using the AWS Neuron machine learning framework integration. However, Inferentia is such a high-performance machine learning accelerator that, once the model successfully compiles and runs, these simple pre- and post-processing operators can become the new performance bottleneck! In this tutorial, we explain some commonly used tensorflow techniques for optimizing the performance of these operators so that we can fully unleash the potential of Inferentia.
<ol class="arabic simple">
<li><p>Write JPEG decoding and image shifting/scaling as tensorflow
operators.</p></li>
</ol>
<p>In <code class="docutils literal notranslate"><span class="pre">yolo_v4_coco_saved_model.py</span></code>, you may find the following code
snippet.</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="o">...</span>
<span class="k">def</span> <span class="nf">YOLOv4</span><span class="p">(</span><span class="o">...</span>
<span class="o">...</span>
<span class="n">x</span><span class="p">,</span> <span class="n">image_shape</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Lambda</span><span class="p">(</span><span class="k">lambda</span> <span class="n">t</span><span class="p">:</span> <span class="n">preprocessor</span><span class="p">(</span><span class="n">t</span><span class="p">,</span> <span class="n">input_shape</span><span class="p">))(</span><span class="n">inputs</span><span class="p">)</span>
<span class="c1"># cspdarknet53</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">conv2d_unit</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">i32</span><span class="p">,</span> <span class="mi">3</span><span class="p">,</span> <span class="n">strides</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="s1">'same'</span><span class="p">)</span>
<span class="o">...</span>
<span class="k">def</span> <span class="nf">decode_jpeg_resize</span><span class="p">(</span><span class="n">input_tensor</span><span class="p">,</span> <span class="n">image_size</span><span class="p">):</span>
<span class="n">tensor</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">image</span><span class="o">.</span><span class="n">decode_png</span><span class="p">(</span><span class="n">input_tensor</span><span class="p">,</span> <span class="n">channels</span><span class="o">=</span><span class="mi">3</span><span class="p">)</span>
<span class="n">shape</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">shape</span><span class="p">(</span><span class="n">tensor</span><span class="p">)</span>
<span class="n">tensor</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">cast</span><span class="p">(</span><span class="n">tensor</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span>
<span class="n">tensor</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">image</span><span class="o">.</span><span class="n">resize</span><span class="p">(</span><span class="n">tensor</span><span class="p">,</span> <span class="n">image_size</span><span class="p">)</span>
<span class="n">tensor</span> <span class="o">/=</span> <span class="mf">255.0</span>
<span class="k">return</span> <span class="n">tf</span><span class="o">.</span><span class="n">cast</span><span class="p">(</span><span class="n">tensor</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">float16</span><span class="p">),</span> <span class="n">shape</span>
<span class="k">def</span> <span class="nf">preprocessor</span><span class="p">(</span><span class="n">input_tensor</span><span class="p">,</span> <span class="n">image_size</span><span class="p">):</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">name_scope</span><span class="p">(</span><span class="s1">'Preprocessor'</span><span class="p">):</span>
<span class="n">tensor</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">map_fn</span><span class="p">(</span>
<span class="n">partial</span><span class="p">(</span><span class="n">decode_jpeg_resize</span><span class="p">,</span> <span class="n">image_size</span><span class="o">=</span><span class="n">image_size</span><span class="p">),</span> <span class="n">input_tensor</span><span class="p">,</span>
<span class="n">dtype</span><span class="o">=</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">float16</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">int32</span><span class="p">),</span> <span class="n">back_prop</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">parallel_iterations</span><span class="o">=</span><span class="mi">16</span><span class="p">)</span>
<span class="k">return</span> <span class="n">tensor</span>
</pre></div>
</div>
Compared with the implementation in [the original repo](https://github.com/miemie2013/Keras-YOLOv4/blob/f0a6b379a362dc3f2d1ef5bd0e58933ed6490ff3/model/yolov4.py), the difference is our use of `tf.image.decode_png` and `tf.image.resize`, along with a small number of scaling/casting operators. After this modification, the generated tensorflow SavedModel takes JPEG image raw bytes as input, instead of a float32 array representing the image. When the image resolution is 608x608, this technique effectively reduces the input image size from 4.4 MB to the size of a typical JPEG image, which can be as little as hundreds of KB. When the tensorflow SavedModel is deployed through [tensorflow/serving](https://github.com/tensorflow/serving), this technique can very effectively reduce the gRPC transfer overhead of input images.
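
To make the serving side concrete, below is a minimal sketch (not part of the tutorial) of a tensorflow/serving gRPC request that ships raw JPEG bytes to such a SavedModel; the model name `yolo_v4` and input key `image_bytes` are illustrative assumptions, not names from the tutorial.

```python
# Hypothetical sketch: send raw JPEG bytes to tensorflow/serving over gRPC.
# The model name ('yolo_v4') and input key ('image_bytes') are illustrative.
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'yolo_v4'
with open('example.jpg', 'rb') as f:
    # A single raw-bytes string replaces a multi-megabyte float32 array.
    request.inputs['image_bytes'].CopyFrom(
        tf.make_tensor_proto([f.read()], dtype=tf.string))

response = stub.Predict(request, 10.0)  # 10-second timeout
```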
<ol class="arabic simple" start="2">
<li><p>Replace non-max suppression (NMS) operations by
<code class="docutils literal notranslate"><span class="pre">tf.image.combined_non_max_suppression</span></code>.</p></li>
</ol>
<p>Another difference of our implementation is the treatment of non-max
suppression, a commmonly used operation for removing redundant bounding
boxes that overlap with other boxes. In an object detection scenario
represented by the COCO dataset where the number of output classes is
large, the hand-fused <code class="docutils literal notranslate"><span class="pre">`tf.image.combined_non_max_suppression</span></code>
<<a class="reference external" href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/image/combined_non_max_suppression">https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/image/combined_non_max_suppression</a>>`__
operator can parallelize multi-class NMS on CPU in a very efficient
manner. With proper use of this operator, the bounding box
post-processing step has a less chance of becoming the performance
bottleneck in the end-to-end object detection pipeline.</p>
The following sample code (from `yolo_v4_coco_saved_model.py`) demonstrates our method of writing the bounding box post-processing step using efficient tensorflow operations.
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="o">...</span>
<span class="k">def</span> <span class="nf">filter_boxes</span><span class="p">(</span><span class="n">outputs</span><span class="p">):</span>
<span class="n">boxes_l</span><span class="p">,</span> <span class="n">boxes_m</span><span class="p">,</span> <span class="n">boxes_s</span><span class="p">,</span> <span class="n">box_scores_l</span><span class="p">,</span> <span class="n">box_scores_m</span><span class="p">,</span> <span class="n">box_scores_s</span><span class="p">,</span> <span class="n">image_shape</span> <span class="o">=</span> <span class="n">outputs</span>
<span class="n">boxes_l</span><span class="p">,</span> <span class="n">box_scores_l</span> <span class="o">=</span> <span class="n">filter_boxes_one_size</span><span class="p">(</span><span class="n">boxes_l</span><span class="p">,</span> <span class="n">box_scores_l</span><span class="p">)</span>
<span class="n">boxes_m</span><span class="p">,</span> <span class="n">box_scores_m</span> <span class="o">=</span> <span class="n">filter_boxes_one_size</span><span class="p">(</span><span class="n">boxes_m</span><span class="p">,</span> <span class="n">box_scores_m</span><span class="p">)</span>
<span class="n">boxes_s</span><span class="p">,</span> <span class="n">box_scores_s</span> <span class="o">=</span> <span class="n">filter_boxes_one_size</span><span class="p">(</span><span class="n">boxes_s</span><span class="p">,</span> <span class="n">box_scores_s</span><span class="p">)</span>
<span class="n">boxes</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">concat</span><span class="p">([</span><span class="n">boxes_l</span><span class="p">,</span> <span class="n">boxes_m</span><span class="p">,</span> <span class="n">boxes_s</span><span class="p">],</span> <span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span>
<span class="n">box_scores</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">concat</span><span class="p">([</span><span class="n">box_scores_l</span><span class="p">,</span> <span class="n">box_scores_m</span><span class="p">,</span> <span class="n">box_scores_s</span><span class="p">],</span> <span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span>
<span class="n">image_shape_wh</span> <span class="o">=</span> <span class="n">image_shape</span><span class="p">[</span><span class="mi">1</span><span class="p">::</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span>
<span class="n">image_shape_whwh</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">concat</span><span class="p">([</span><span class="n">image_shape_wh</span><span class="p">,</span> <span class="n">image_shape_wh</span><span class="p">],</span> <span class="n">axis</span><span class="o">=-</span><span class="mi">1</span><span class="p">)</span>
<span class="n">image_shape_whwh</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">cast</span><span class="p">(</span><span class="n">image_shape_whwh</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">)</span>
<span class="n">boxes</span> <span class="o">*=</span> <span class="n">image_shape_whwh</span>
<span class="n">boxes</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">boxes</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="n">box_scores</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">box_scores</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="n">boxes</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">boxes</span><span class="p">,</span> <span class="mi">2</span><span class="p">)</span>
<span class="n">nms_boxes</span><span class="p">,</span> <span class="n">nms_scores</span><span class="p">,</span> <span class="n">nms_classes</span><span class="p">,</span> <span class="n">valid_detections</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">image</span><span class="o">.</span><span class="n">combined_non_max_suppression</span><span class="p">(</span>
<span class="n">boxes</span><span class="p">,</span>
<span class="n">box_scores</span><span class="p">,</span>
<span class="n">max_output_size_per_class</span><span class="o">=</span><span class="n">nms_top_k</span><span class="p">,</span>
<span class="n">max_total_size</span><span class="o">=</span><span class="n">nms_top_k</span><span class="p">,</span>
<span class="n">iou_threshold</span><span class="o">=</span><span class="n">nms_thresh</span><span class="p">,</span>
<span class="n">score_threshold</span><span class="o">=</span><span class="n">conf_thresh</span><span class="p">,</span>
<span class="n">pad_per_class</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span>
<span class="n">clip_boxes</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span>
<span class="n">name</span><span class="o">=</span><span class="s1">'CombinedNonMaxSuppression'</span><span class="p">,</span>
<span class="p">)</span>
<span class="k">return</span> <span class="n">nms_boxes</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">nms_scores</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">nms_classes</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
<span class="k">def</span> <span class="nf">filter_boxes_one_size</span><span class="p">(</span><span class="n">boxes</span><span class="p">,</span> <span class="n">box_scores</span><span class="p">):</span>
<span class="n">box_class_scores</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_max</span><span class="p">(</span><span class="n">box_scores</span><span class="p">,</span> <span class="n">axis</span><span class="o">=-</span><span class="mi">1</span><span class="p">)</span>
<span class="n">keep</span> <span class="o">=</span> <span class="n">box_class_scores</span> <span class="o">></span> <span class="n">conf_thresh</span>
<span class="n">boxes</span> <span class="o">=</span> <span class="n">boxes</span><span class="p">[</span><span class="n">keep</span><span class="p">]</span>
<span class="n">box_scores</span> <span class="o">=</span> <span class="n">box_scores</span><span class="p">[</span><span class="n">keep</span><span class="p">]</span>
<span class="k">return</span> <span class="n">boxes</span><span class="p">,</span> <span class="n">box_scores</span>
<span class="k">def</span> <span class="nf">batch_yolo_out</span><span class="p">(</span><span class="n">outputs</span><span class="p">):</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">name_scope</span><span class="p">(</span><span class="s1">'yolo_out'</span><span class="p">):</span>
<span class="n">b_output_lr</span><span class="p">,</span> <span class="n">b_output_mr</span><span class="p">,</span> <span class="n">b_output_sr</span><span class="p">,</span> <span class="n">b_image_shape</span> <span class="o">=</span> <span class="n">outputs</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">name_scope</span><span class="p">(</span><span class="s1">'process_feats'</span><span class="p">):</span>
<span class="n">b_boxes_l</span><span class="p">,</span> <span class="n">b_box_scores_l</span> <span class="o">=</span> <span class="n">batch_process_feats</span><span class="p">(</span><span class="n">b_output_lr</span><span class="p">,</span> <span class="n">anchors</span><span class="p">,</span> <span class="n">masks</span><span class="p">[</span><span class="mi">0</span><span class="p">])</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">name_scope</span><span class="p">(</span><span class="s1">'process_feats'</span><span class="p">):</span>
<span class="n">b_boxes_m</span><span class="p">,</span> <span class="n">b_box_scores_m</span> <span class="o">=</span> <span class="n">batch_process_feats</span><span class="p">(</span><span class="n">b_output_mr</span><span class="p">,</span> <span class="n">anchors</span><span class="p">,</span> <span class="n">masks</span><span class="p">[</span><span class="mi">1</span><span class="p">])</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">name_scope</span><span class="p">(</span><span class="s1">'process_feats'</span><span class="p">):</span>
<span class="n">b_boxes_s</span><span class="p">,</span> <span class="n">b_box_scores_s</span> <span class="o">=</span> <span class="n">batch_process_feats</span><span class="p">(</span><span class="n">b_output_sr</span><span class="p">,</span> <span class="n">anchors</span><span class="p">,</span> <span class="n">masks</span><span class="p">[</span><span class="mi">2</span><span class="p">])</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">name_scope</span><span class="p">(</span><span class="s1">'filter_boxes'</span><span class="p">):</span>
<span class="n">b_nms_boxes</span><span class="p">,</span> <span class="n">b_nms_scores</span><span class="p">,</span> <span class="n">b_nms_classes</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">map_fn</span><span class="p">(</span>
<span class="n">filter_boxes</span><span class="p">,</span> <span class="p">[</span><span class="n">b_boxes_l</span><span class="p">,</span> <span class="n">b_boxes_m</span><span class="p">,</span> <span class="n">b_boxes_s</span><span class="p">,</span> <span class="n">b_box_scores_l</span><span class="p">,</span> <span class="n">b_box_scores_m</span><span class="p">,</span> <span class="n">b_box_scores_s</span><span class="p">,</span> <span class="n">b_image_shape</span><span class="p">],</span>
<span class="n">dtype</span><span class="o">=</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">),</span> <span class="n">back_prop</span><span class="o">=</span><span class="kc">False</span><span class="p">,</span> <span class="n">parallel_iterations</span><span class="o">=</span><span class="mi">16</span><span class="p">)</span>
<span class="k">return</span> <span class="n">b_nms_boxes</span><span class="p">,</span> <span class="n">b_nms_scores</span><span class="p">,</span> <span class="n">b_nms_classes</span>
<span class="n">boxes_scores_classes</span> <span class="o">=</span> <span class="n">layers</span><span class="o">.</span><span class="n">Lambda</span><span class="p">(</span><span class="n">batch_yolo_out</span><span class="p">)([</span><span class="n">output_lr</span><span class="p">,</span> <span class="n">output_mr</span><span class="p">,</span> <span class="n">output_sr</span><span class="p">,</span> <span class="n">image_shape</span><span class="p">])</span>
<span class="o">...</span>
</pre></div>
</div>
For other advanced data input/output pipeline optimization techniques, please refer to <https://www.tensorflow.org/guide/data#preprocessing_data>.
_This document is relevant for_: `Inf1`
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
<a class="left-prev" id="prev-link" href="../../../../../src/examples/tensorflow/tensorflow_resnet50/resnet50.html" title="previous page">
<i class="fas fa-angle-left"></i>
<div class="prev-next-info">
<p class="prev-next-subtitle">previous</p>
<p class="prev-next-title">Running ResNet50 on Inferentia</p>
</div>
</a>
<a class="right-next" id="next-link" href="../../../../../src/examples/tensorflow/yolo_v3_demo/yolo_v3.html" title="next page">
<div class="prev-next-info">
<p class="prev-next-subtitle">next</p>
<p class="prev-next-title">Evaluate YOLO v3 on Inferentia</p>
</div>
<i class="fas fa-angle-right"></i>
</a>
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html>
|
2023-09-29T20:54:51.707Z
|
Neuron Collective Communication — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/arch/neuron-features/collective-communication.html
|
# Neuron Collective Communication — AWS Neuron Documentation
## Contents
- [Introduction](#introduction)
- [trn1.32xlarge topology](#trn1-32xlarge-topology)
- [trn1.2xlarge topology](#trn1-2xlarge-topology)
- [inf2.48xlarge topology](#inf2-48xlarge-topology)
- [Inf2 other instance sizes topologies](#inf2-other-instance-sizes-topologies)
_This document is relevant for_: `Inf2`, `Trn1`, `Trn1n`
## Neuron Collective Communication
## Introduction
Collective Communication is an integral component of distributed ML training. Multiple training nodes exchange information during ML training via Collective Communication operators such as all-reduce. Neuron provides hardware support for the execution of Collective Communication, with the Neuron SDK responsible for the hardware configuration and the execution orchestration. Neuron provides the following Collective Communication operators:
- all-reduce
- all-gather
- reduce-scatter
Neuron also provides the following peer-to-peer operators:
- send
- receive
Support for additional Collective Communication operators might be added in future releases. Neuron devices are connected via NeuronLinks within a single instance and via EFA links between instances. NeuronLinks transfer data directly between Neuron devices, and between Neuron devices and EFA devices, bypassing the host to achieve high bandwidth and low latency.
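To make the operator list concrete, the following is a minimal sketch of issuing an all-reduce from PyTorch Neuron (`torch-neuronx`) using the `xla` process-group backend, which routes collectives to the Neuron stack. The tensor contents and the two-rank launch are illustrative assumptions, not part of this guide:

```python
# all_reduce_demo.py -- minimal sketch; launch with e.g.:
#   torchrun --nproc_per_node=2 all_reduce_demo.py
import torch
import torch.distributed as dist
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_backend # registers the "xla" backend

def main():
dist.init_process_group("xla") # collectives run over NeuronLink/EFA
device = xm.xla_device()
t = torch.full((4,), float(dist.get_rank()), device=device)
dist.all_reduce(t) # element-wise sum across all ranks
xm.mark_step() # execute the traced graph on Neuron
print(f"rank {dist.get_rank()}: {t.cpu()}")

if __name__ == "__main__":
main()
```

With two ranks, every element ends up as 0 + 1 = 1 on both ranks; all-gather and reduce-scatter are invoked analogously through `torch.distributed`.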
Collective Communication support on Neuron requires the installation of three separate packages:
- `aws-neuronx-runtime-lib` - supports execution on Neuron; it is not specific to Collective Communication and is always required.
- `aws-neuronx-collectives` - supports Collective Communication execution on a single instance and on multiple instances.
- `efa_installer` - low-level libraries and drivers that support Collective Communication execution over EFA; required for Collective Communication across multiple instances.
ML models must be compiled by the Neuron compiler before they can be executed on Neuron devices. The result of the compilation is a binary object containing computational instructions and data movement instructions. Any Collective Communication operators encountered during compilation are converted to placeholder instructions to be filled in by the runtime/collectives libraries during load and execution. This approach allows the Neuron compiler to be unaware of the specific physical topology connecting Neuron devices. Once a compiled model is placed on Neuron devices, the runtime/collectives libraries generate the appropriate data movement instructions based on the placement. For example, a different set of instructions is generated when the next rank is connected via NeuronLinks than when it is connected via EFA.
Neuron executes Collective Communication operators using dedicated hardware that is not shared with computational resources. This allows Neuron to execute compute and communication in parallel; for example, Neuron can all-reduce the gradients of one layer while the gradients of another layer are being computed. Overlapping compute and communication can result in lower latency and higher performance.
## trn1.32xlarge topology

**Trn1.32xl 2D torus topology**
On a single trn1.32xlarge instance, Neuron devices are connected in a 2D torus topology, supporting Collective Communication operators in sets of 2, 8, and 32 ranks. Other set sizes might be supported in future releases. A single-instance topology can be further extended across multiple instances using EFA links.
For example, an 8x4 topology on a single instance, such as 8-way tensor parallelism and 4-way data parallelism, can be extended across multiple instances, creating a large tensor/data-parallel training cluster.
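As an illustrative sketch, the 8x4 example above could be expressed with the `neuronx-distributed` library; using that library, and the specific group sizes shown, are assumptions for illustration rather than requirements:

```python
# Sketch of carving 32 ranks on one trn1.32xlarge into 8-way tensor
# parallelism x 4-way data parallelism with neuronx-distributed.
import torch.distributed as dist
import torch_xla.distributed.xla_backend # registers the "xla" backend
from neuronx_distributed.parallel_layers import parallel_state

dist.init_process_group("xla") # e.g. launched with 32 ranks in total
parallel_state.initialize_model_parallel(tensor_model_parallel_size=8)

# Each rank now belongs to one 8-rank tensor-parallel group and one
# 4-rank data-parallel group; gradients are all-reduced within the latter.
print("TP rank:", parallel_state.get_tensor_model_parallel_rank())
print("DP rank:", parallel_state.get_data_parallel_rank())
```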
## trn1.2xlarge topology
The trn1.2xlarge instance type contains a single Neuron device with two NeuronCores. This instance type supports only 2-rank Collective Communication operators. EFA is not available on trn1.2xlarge, and the ranks cannot be extended beyond a single instance.
## inf2.48xlarge topology

**inf2.48xlarge topology**
On an inf2.48xlarge instance, Neuron devices are connected in a ring via NeuronLink. Any **even** number of ranks for Collective Communication operators is supported, provided that the ranks occupy consecutive Neuron devices. However, when using any number of ranks other than 24 (the full instance), the full performance of the ring is not utilized.
## Inf2 other instance sizes topologies

**inf2 other instance sizes topologies**
On other inf2 instance sizes, Neuron devices are connected bi-directionally. Any **even** number of ranks for Collective Communication operators is supported, provided that the ranks occupy consecutive Neuron devices. Collective Communication performance is similar to the performance on inf2.48xlarge when fewer than 24 ranks are used.
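For example, one way to keep ranks on consecutive devices is to restrict which NeuronCores the runtime may use via the `NEURON_RT_VISIBLE_CORES` environment variable (see the Neuron Runtime configuration guide). The sketch below is illustrative, and the particular core range is an assumption:

```python
import os

# Each Inf2 Neuron device exposes two NeuronCores, so cores 0-7 span four
# consecutive devices. This must be set before the Neuron runtime
# initializes (i.e., before any model is loaded in this process).
os.environ["NEURON_RT_VISIBLE_CORES"] = "0-7"
```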
_This document is relevant for_: `Inf2`, `Trn1`, `Trn1n`
|
<!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Neuron Collective Communication — AWS Neuron Documentation</title>
<!-- Loaded before other Sphinx assets -->
<link href="../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link href="../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link rel="stylesheet" href="../../../_static/vendor/fontawesome/5.13.0/css/all.min.css">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
<link rel="stylesheet" type="text/css" href="../../../_static/pygments.css">
<link rel="stylesheet" href="../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css">
<link rel="stylesheet" type="text/css" href="../../../_static/css/custom.css">
<link rel="stylesheet" type="text/css" href="../../../_static/styles/sphinx-book-theme.css">
<link rel="stylesheet" type="text/css" href="../../../_static/contentui.css">
<link rel="stylesheet" type="text/css" href="../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css">
<link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css">
<!-- Pre-loaded scripts that we'll load fully later -->
<link rel="preload" as="script" href="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf">
<script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&l=dataLayer&cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../" id="documentation_options" src="../../../_static/documentation_options.js"></script>
<script src="../../../_static/jquery.js"></script>
<script src="../../../_static/underscore.js"></script>
<script src="../../../_static/doctools.js"></script>
<script src="../../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script>
<script src="../../../_static/contentui.js"></script>
<script src="../../../_static/design-tabs.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script>
<link rel="index" title="Index" href="../../../genindex.html">
<link rel="search" title="Search" href="../../../search.html">
<link rel="next" title="Neuron Control Flow" href="control-flow.html">
<link rel="prev" title="NeuronCore Pipeline" href="neuroncore-pipeline.html">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="docsearch:language" content="en">
<!-- Google Analytics -->
<style type="text/css">
ul.ablog-archive {
list-style: none;
overflow: auto;
margin-left: 0px;
}
ul.ablog-archive li {
float: left;
margin-right: 5px;
font-size: 80%;
}
ul.postlist a {
font-style: italic;
}
ul.postlist-style-disc {
list-style-type: disc;
}
ul.postlist-style-none {
list-style-type: none;
}
ul.postlist-style-circle {
list-style-type: circle;
}
</style>
<!-- RTD Extra Head -->
<link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css">
<script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "general/arch/neuron-features/collective-communication", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": "v2.14.1"}</script>
<!--
Using this variable directly instead of using `JSON.parse` is deprecated.
The READTHEDOCS_DATA global variable will be removed in the future.
-->
<script type="text/javascript">
READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML);
</script>
<script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script>
<!-- end RTD <extrahead> -->
<script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="current nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 current active has-children">
<a class="reference internal" href="index.html">
Features
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l2">
<a class="reference internal" href="data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2 current active">
<a class="current reference internal" href="#">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"><!-- Inserted RTD Footer -->
<div class="injected">
<div class="rst-versions rst-badge" data-toggle="rst-versions">
<span class="rst-current-version" data-toggle="rst-current-version">
<span class="fa fa-book"> </span>
v: v2.14.1
<span class="fa fa-caret-down"></span>
</span>
<div class="rst-other-versions">
<dl>
<dt>Versions</dt>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/arch/neuron-features/collective-communication.html">latest</a>
</dd>
<dd class="rtd-current-item">
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/arch/neuron-features/collective-communication.html">v2.14.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.0/general/arch/neuron-features/collective-communication.html">v2.14.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.13.2/general/arch/neuron-features/collective-communication.html">v2.13.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.13.1/general/arch/neuron-features/collective-communication.html">v2.13.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.13.0/general/arch/neuron-features/collective-communication.html">v2.13.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.12.2/general/arch/neuron-features/collective-communication.html">v2.12.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.12.1/general/arch/neuron-features/collective-communication.html">v2.12.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.12.0/general/arch/neuron-features/collective-communication.html">v2.12.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.11.0/general/arch/neuron-features/collective-communication.html">v2.11.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.10.0/general/arch/neuron-features/collective-communication.html">v2.10.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.9.1/general/arch/neuron-features/collective-communication.html">v2.9.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.9.0/general/arch/neuron-features/collective-communication.html">v2.9.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.8.0/general/arch/neuron-features/collective-communication.html">v2.8.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.7.0/general/arch/neuron-features/collective-communication.html">v2.7.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.6.0/general/arch/neuron-features/collective-communication.html">v2.6.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.5.0/general/arch/neuron-features/collective-communication.html">v2.5.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.4.0/general/arch/neuron-features/collective-communication.html">v2.4.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.3.0/general/arch/neuron-features/collective-communication.html">v2.3.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.19.2/general/arch/neuron-features/collective-communication.html">v1.19.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.19.1/general/arch/neuron-features/collective-communication.html">v1.19.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.19.0/general/arch/neuron-features/collective-communication.html">v1.19.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.18.0/general/arch/neuron-features/collective-communication.html">v1.18.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.17.2/general/arch/neuron-features/collective-communication.html">v1.17.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.17.1/general/arch/neuron-features/collective-communication.html">v1.17.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.17.0/general/arch/neuron-features/collective-communication.html">v1.17.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.3/general/arch/neuron-features/collective-communication.html">v1.16.3</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.2/general/arch/neuron-features/collective-communication.html">v1.16.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.1/general/arch/neuron-features/collective-communication.html">v1.16.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.0/general/arch/neuron-features/collective-communication.html">v1.16.0</a>
</dd>
</dl>
<dl>
<dt>Downloads</dt>
<dd><a href="//awsdocs-neuron.readthedocs-hosted.com/_/downloads/en/v2.14.1/pdf/">PDF</a></dd>
</dl>
<dl>
<dt>On GitHub</dt>
<dd>
<a href="https://github.com/aws/aws-neuron-sdk/blob/v2.14.1//general/arch/neuron-features/collective-communication.rst">View</a>
</dd>
</dl>
<hr>
<div>
<div>
Documentation hosted by <a href="https://readthedocs.com">Read the Docs</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fgeneral/arch/neuron-features/collective-communication.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/general/arch/neuron-features/collective-communication.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../_sources/general/arch/neuron-features/collective-communication.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.rst</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#introduction">
Introduction
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#trn1-32xlarge-topology">
trn1.32xlarge topology
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#trn1-2xlarge-topology">
trn1.2xlarge topology
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#inf2-48xlarge-topology">
inf2.48xlarge topology
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#inf2-other-instance-sizes-topologies">
Inf2 other instance sizes topologies
</a>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>Neuron Collective Communication</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
<div>
<h2> Contents </h2>
</div>
<nav aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#introduction">
Introduction
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#trn1-32xlarge-topology">
trn1.32xlarge topology
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#trn1-2xlarge-topology">
trn1.2xlarge topology
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#inf2-48xlarge-topology">
inf2.48xlarge topology
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#inf2-other-instance-sizes-topologies">
Inf2 other instance sizes topologies
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p>
<div class="section" id="neuron-collective-communication">
<span id="feature-cccom"></span><h1>Neuron Collective Communication<a class="headerlink" href="#neuron-collective-communication" title="Permalink to this headline">#</a></h1>
<div class="contents local topic" id="table-of-contents">
<p class="topic-title">Table of contents</p>
<ul class="simple">
<li><p><a class="reference internal" href="#introduction" id="id1">Introduction</a></p></li>
<li><p><a class="reference internal" href="#trn1-32xlarge-topology" id="id2">trn1.32xlarge topology</a></p></li>
<li><p><a class="reference internal" href="#trn1-2xlarge-topology" id="id3">trn1.2xlarge topology</a></p></li>
<li><p><a class="reference internal" href="#inf2-48xlarge-topology" id="id4">inf2.48xlarge topology</a></p></li>
<li><p><a class="reference internal" href="#inf2-other-instance-sizes-topologies" id="id5">Inf2 other instance sizes topologies</a></p></li>
</ul>
</div>
<div class="section" id="introduction">
<h2><a class="toc-backref" href="#id1">Introduction</a><a class="headerlink" href="#introduction" title="Permalink to this headline">#</a></h2>
<p>Collective Communication is an integral component of distributed ML
training. Multiple training nodes exchange information during training
via Collective Communication operators such as all-reduce. Neuron
provides hardware support for executing Collective Communication, with
the Neuron SDK responsible for the hardware configuration and for the
execution orchestration. Neuron provides the following Collective
Communication operators:</p>
<ul class="simple">
<li><p>all-reduce</p></li>
<li><p>all-gather</p></li>
<li><p>reduce-scatter</p></li>
</ul>
<p>Neuron also provides the following peer-to-peer operators:</p>
<ul class="simple">
<li><p>send</p></li>
<li><p>receive</p></li>
</ul>
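<p>As an illustration, a minimal sketch of how an all-reduce typically surfaces in a training script follows. It assumes a <code class="docutils literal notranslate"><span class="pre">torch-neuronx</span></code>/<code class="docutils literal notranslate"><span class="pre">torch_xla</span></code> environment in which each NeuronCore is exposed as an XLA device; this is one possible way to invoke the operator, not the only one:</p>
<pre>import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()           # a NeuronCore exposed as an XLA device
t = torch.ones(4, device=device)   # each rank contributes a local tensor
xm.all_reduce(xm.REDUCE_SUM, [t])  # in-place sum across participating ranks
xm.mark_step()                     # execute the traced graph on the device</pre>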
<p>Support for additional Collective Communication operators might be added
in future releases. Neuron devices are connected via NeuronLinks within
a single instance and via EFA links between instances. NeuronLinks
transfer data directly between Neuron devices, and between Neuron
devices and EFA devices, bypassing the host to achieve high bandwidth
and low latency.</p>
<p>Collective Communication support on Neuron requires the installation of
three separate packages:</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">aws-neuronx-runtime-lib</span></code> - supports execution on Neuron, not
specific to Collective Communication and is always required</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">aws-neuronx-collectives</span></code> - supports Collective Communication
execution on a single instance and on multiple instances.</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">efa_installer</span></code> - low level libraries and drivers to support
Collective Communication execution over EFA, required to support
Collective Communication on multiple instances.</p></li>
</ul>
<p>ML models need to be compiled by the Neuron compiler before they can be
executed on Neuron devices. The result of the compilation is a binary
object containing computational instructions and data movement
instructions. Any Collective Communication operators encountered during
compilation are converted to placeholder instructions to be filled in
by the runtime/collectives libraries during load and execution. This
approach allows the Neuron compiler to be unaware of the specific
physical topology connecting Neuron devices. Once a compiled model is
placed on Neuron devices, the runtime/collectives libraries generate the
appropriate data movement instructions based on the placement. For
example, a different set of instructions is generated when the next rank
is connected via NeuronLinks or via EFA. Neuron executes Collective
Communication operators using dedicated hardware that is not shared with
computational resources. That allows Neuron to execute compute and
communication in parallel. For example, Neuron can all-reduce the
gradients of one layer while the gradients of another layer are computed.
Overlapping compute and communication can result in lower latency and
higher performance.</p>
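<p>This overlap does not require special handling in user code. As a minimal sketch (same <code class="docutils literal notranslate"><span class="pre">torch-neuronx</span></code>/<code class="docutils literal notranslate"><span class="pre">torch_xla</span></code> assumption as above), a typical training step simply issues the gradient all-reduce through the framework, which can then run on the dedicated collective hardware while remaining compute proceeds:</p>
<pre>import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()
model = nn.Linear(8, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(4, 8, device=device)
y = torch.randn(4, 2, device=device)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
xm.optimizer_step(optimizer)  # all-reduce gradients across ranks, then step</pre>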
</div>
<div class="section" id="trn1-32xlarge-topology">
<span id="trn132xlarge-topology"></span><h2><a class="toc-backref" href="#id2">trn1.32xlarge topology</a><a class="headerlink" href="#trn1-32xlarge-topology" title="Permalink to this headline">#</a></h2>
<img alt="../../../_images/trn1-topology.png" src="../../../_images/trn1-topology.png">
<p><strong>Trn1.32xl 2D torus topology</strong></p>
<p>On a single trn1.32xlarge instance, Neuron devices are connected in a 2D
torus topology supporting Collective Communication operators in sets of
2, 8, and 32 ranks. Other set sizes might be supported in future
releases. A single-instance topology can be further extended across
multiple instances using EFA links.</p>
<p>For example, an 8x4 topology on a single instance, such as 8-rank tensor
parallelism and 4-rank data parallelism, can be extended across multiple
instances, creating a large tensor/data-parallel training cluster.</p>
</div>
<div class="section" id="trn1-2xlarge-topology">
<span id="trn12xlarge-topology"></span><h2><a class="toc-backref" href="#id3">trn1.2xlarge topology</a><a class="headerlink" href="#trn1-2xlarge-topology" title="Permalink to this headline">#</a></h2>
<p>The trn1.2xlarge instance type contains a single Neuron device with two
NeuronCores. This instance type supports Collective Communication
operators with only 2 ranks. EFA is not available on trn1.2xlarge, so
the ranks cannot be extended beyond a single instance.</p>
</div>
<div class="section" id="inf2-48xlarge-topology">
<span id="inf248xlarge-topology"></span><h2><a class="toc-backref" href="#id4">inf2.48xlarge topology</a><a class="headerlink" href="#inf2-48xlarge-topology" title="Permalink to this headline">#</a></h2>
<img alt="../../../_images/inf248xl-topology.png" src="../../../_images/inf248xl-topology.png">
<p><strong>inf2.48xlarge topology</strong></p>
<p>On an inf2.48xlarge instance, Neuron devices are connected in a ring via
NeuronLink. Any <strong>even</strong> number of ranks is supported for Collective
Communication operators, provided that the ranks occupy consecutive
Neuron devices. However, with any rank count other than 24 (the full
instance), the full performance of the ring is not utilized.</p>
</div>
<div class="section" id="inf2-other-instance-sizes-topologies">
<h2><a class="toc-backref" href="#id5">Inf2 other instance sizes topologies</a><a class="headerlink" href="#inf2-other-instance-sizes-topologies" title="Permalink to this headline">#</a></h2>
<img alt="../../../_images/inf224xl-topology.png" src="../../../_images/inf224xl-topology.png">
<p><strong>inf2 other instance sizes topologies</strong></p>
<p>On the other inf2 instance sizes, Neuron devices are connected
bi-directionally. Any <strong>even</strong> number of ranks is supported for
Collective Communication operators, provided that the ranks occupy
consecutive Neuron devices. Collective Communication performance is
similar to inf2.48xlarge performance when fewer than 24 ranks are used.</p>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p>
</div>
</div>
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
<a class="left-prev" id="prev-link" href="neuroncore-pipeline.html" title="previous page">
<i class="fas fa-angle-left"></i>
<div class="prev-next-info">
<p class="prev-next-subtitle">previous</p>
<p class="prev-next-title">NeuronCore Pipeline</p>
</div>
</a>
<a class="right-next" id="next-link" href="control-flow.html" title="next page">
<div class="prev-next-info">
<p class="prev-next-subtitle">next</p>
<p class="prev-next-title">Neuron Control Flow</p>
</div>
<i class="fas fa-angle-right"></i>
</a>
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html>
|
2023-09-29T20:54:51.860Z
|
Running ResNet50 on Inferentia — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/src/examples/tensorflow/tensorflow_resnet50/resnet50.html
|
# Running ResNet50 on Inferentia — AWS Neuron Documentation
## Contents
- [Note: this tutorial runs on tensorflow-neuron 1.x only](#Note:-this-tutorial-runs-on-tensorflow-neuron-1.x-only)
- [Introduction:](#Introduction:)
- [Compile for Neuron](#Compile-for-Neuron)
- [Deploy on Inferentia](#Deploy-on-Inferentia)
- [After downloading the example image, run the inference.](#After-downloading-the-example-image,-run-the-inference.)
## Running ResNet50 on Inferentia
## Note: this tutorial runs on tensorflow-neuron 1.x only
## Introduction:
In this tutorial we compile and deploy a ResNet50 model for Inferentia. The tutorial has two main sections: 1. Compile the ResNet50 model. 2. Run inference with the same compiled model.
Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [Tensorflow Installation Guide](../../../../frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow). You can select the kernel from the “Kernel -> Change Kernel” option at the top of this Jupyter notebook page.
Instructions on how to set up the Neuron TensorFlow environment and run the tutorial as a Jupyter notebook are available in the [Tensorflow Quick Setup](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/tensorflow/tensorflow-neuron/tutorials/tensorflow-tutorial-setup.html#tensorflow-tutorial-setup).
## Compile for Neuron
A trained model must be compiled to an Inferentia target before it can be deployed on Inferentia instances. In this step, we compile the Keras ResNet50 model and export it as a SavedModel, which is an interchange format for TensorFlow models. At the end of compilation, the compiled SavedModel is saved in the local directory `resnet50_neuron`:
```
import os
import time
import shutil
import tensorflow as tf
import tensorflow.neuron as tfn
import tensorflow.compat.v1.keras as keras
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
# Create a workspace
WORKSPACE = './ws_resnet50'
os.makedirs(WORKSPACE, exist_ok=True)
# Prepare export directory (old one removed)
model_dir = os.path.join(WORKSPACE, 'resnet50')
compiled_model_dir = os.path.join(WORKSPACE, 'resnet50_neuron')
shutil.rmtree(model_dir, ignore_errors=True)
shutil.rmtree(compiled_model_dir, ignore_errors=True)
# Instantiate Keras ResNet50 model
keras.backend.set_learning_phase(0)
keras.backend.set_image_data_format('channels_last')
model = ResNet50(weights='imagenet')
# Export SavedModel
tf.saved_model.simple_save(
session = keras.backend.get_session(),
export_dir = model_dir,
inputs = {'input': model.inputs[0]},
outputs = {'output': model.outputs[0]})
# Compile using Neuron
tfn.saved_model.compile(model_dir, compiled_model_dir)
```
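Before moving on to deployment, it can help to confirm that compilation actually produced a deployable artifact. The following is a minimal sketch (not part of the original tutorial) that assumes the workspace layout used above:
```
import os

# Directory written by tfn.saved_model.compile above (assumed layout).
compiled_model_dir = './ws_resnet50/resnet50_neuron'

# Every TensorFlow SavedModel directory contains a saved_model.pb graph file.
assert os.path.isfile(os.path.join(compiled_model_dir, 'saved_model.pb')), \
    'No SavedModel found; check the compiler output above for errors.'
print('Compiled SavedModel found at', compiled_model_dir)
```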
## Deploy on Inferentia
In this tutorial we deploy on the same instance used for compilation. To deploy on a different instance, launch an inf1 deployment instance and copy the compiled model directory to it.
Download the example image, and install the pillow module for inference on the deployment instance:
```
!curl -O https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg
!pip install pillow # Necessary for loading images
```
### After downloading the example image, run the inference
```
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications import resnet50
tf.keras.backend.set_image_data_format('channels_last')
# Create input from image
img_sgl = image.load_img('kitten_small.jpg', target_size=(224, 224))
img_arr = image.img_to_array(img_sgl)
img_arr2 = np.expand_dims(img_arr, axis=0)
img_arr3 = resnet50.preprocess_input(img_arr2)
# Load model
COMPILED_MODEL_DIR = './ws_resnet50/resnet50_neuron/'
predictor_inferentia = tf.contrib.predictor.from_saved_model(COMPILED_MODEL_DIR)
# Run inference
model_feed_dict={'input': img_arr3}
infa_rslts = predictor_inferentia(model_feed_dict)
# Display results
print(resnet50.decode_predictions(infa_rslts["output"], top=5)[0])
# Sample output will look like below:
#[('n02123045', 'tabby', 0.68817204), ('n02127052', 'lynx', 0.12701613), ('n02123159', 'tiger_cat', 0.08736559), ('n02124075', 'Egyptian_cat', 0.063844085), ('n02128757', 'snow_leopard', 0.009240591)]
```
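With the prediction working, a quick way to gauge single-image latency is to time repeated calls through the same predictor. This is a rough sketch, not part of the original tutorial; the warmup and iteration counts below are arbitrary assumptions:
```
import time

# Warm up so one-time initialization cost is excluded from the measurement.
num_warmup, num_iters = 10, 100
for _ in range(num_warmup):
    predictor_inferentia(model_feed_dict)

# Time repeated single-image inferences through the compiled model.
start = time.time()
for _ in range(num_iters):
    predictor_inferentia(model_feed_dict)
elapsed = time.time() - start
print('Average latency: {:.2f} ms'.format(elapsed / num_iters * 1000.0))
```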
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"><!-- Inserted RTD Footer -->
<div class="injected">
<div class="rst-versions rst-badge" data-toggle="rst-versions">
<span class="rst-current-version" data-toggle="rst-current-version">
<span class="fa fa-book"> </span>
v: v2.14.1
<span class="fa fa-caret-down"></span>
</span>
<div class="rst-other-versions">
<dl>
<dt>Versions</dt>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">latest</a>
</dd>
<dd class="rtd-current-item">
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.14.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.14.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.13.2/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.13.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.13.1/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.13.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.13.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.13.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.12.2/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.12.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.12.1/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.12.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.12.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.12.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.11.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.11.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.10.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.10.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.9.1/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.9.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.9.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.9.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.8.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.8.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.7.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.7.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.6.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.6.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.5.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.5.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.4.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.4.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.3.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v2.3.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.19.2/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v1.19.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.19.1/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v1.19.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.19.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v1.19.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.18.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v1.18.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.17.2/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v1.17.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.17.1/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v1.17.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.17.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v1.17.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.3/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v1.16.3</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.2/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v1.16.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.1/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v1.16.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.0/src/examples/tensorflow/tensorflow_resnet50/resnet50.html">v1.16.0</a>
</dd>
</dl>
<dl>
<dt>Downloads</dt>
<dd><a href="//awsdocs-neuron.readthedocs-hosted.com/_/downloads/en/v2.14.1/pdf/">PDF</a></dd>
</dl>
<dl>
<dt>On GitHub</dt>
<dd>
<a href="https://github.com/aws/aws-neuron-sdk/blob/v2.14.1//src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb">View</a>
</dd>
</dl>
<hr>
<div>
<div>
Documentation hosted by <a href="https://readthedocs.com">Read the Docs</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fsrc/examples/tensorflow/tensorflow_resnet50/resnet50.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../../_sources/src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.ipynb</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Note:-this-tutorial-runs-on-tensorflow-neuron-1.x-only">
Note: this tutorial runs on tensorflow-neuron 1.x only
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Introduction:">
Introduction:
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Compile-for-Neuron">
Compile for Neuron
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Deploy-on-Inferentia">
Deploy on Inferentia
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#After-downloading-the-example-image,-run-the-inference.">
After downloading the example image, run the inference.
</a>
</li>
</ul>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
Running ResNet50 on Inferentia
==============================
.ansi-magenta-bg { background-color: #D160C4; }
.ansi-magenta-intense-fg { color: #A03196; }
.ansi-magenta-intense-bg { background-color: #A03196; }
.ansi-cyan-fg { color: #60C6C8; }
.ansi-cyan-bg { background-color: #60C6C8; }
.ansi-cyan-intense-fg { color: #258F8F; }
.ansi-cyan-intense-bg { background-color: #258F8F; }
.ansi-white-fg { color: #C5C1B4; }
.ansi-white-bg { background-color: #C5C1B4; }
.ansi-white-intense-fg { color: #A1A6B2; }
.ansi-white-intense-bg { background-color: #A1A6B2; }
.ansi-default-inverse-fg { color: #FFFFFF; }
.ansi-default-inverse-bg { background-color: #000000; }
.ansi-bold { font-weight: bold; }
.ansi-underline { text-decoration: underline; }
div.nbinput.container div.input_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight].math,
div.nboutput.container div.output_area.rendered_html,
div.nboutput.container div.output_area > div.output_javascript,
div.nboutput.container div.output_area:not(.rendered_html) > img{
padding: 5px;
margin: 0;
}
/* fix copybtn overflow problem in chromium (needed for 'sphinx_copybutton') */
div.nbinput.container div.input_area > div[class^='highlight'],
div.nboutput.container div.output_area > div[class^='highlight']{
overflow-y: hidden;
}
/* hide copybtn icon on prompts (needed for 'sphinx_copybutton') */
.prompt .copybtn {
display: none;
}
/* Some additional styling taken form the Jupyter notebook CSS */
.jp-RenderedHTMLCommon table,
div.rendered_html table {
border: none;
border-collapse: collapse;
border-spacing: 0;
color: black;
font-size: 12px;
table-layout: fixed;
}
.jp-RenderedHTMLCommon thead,
div.rendered_html thead {
border-bottom: 1px solid black;
vertical-align: bottom;
}
.jp-RenderedHTMLCommon tr,
.jp-RenderedHTMLCommon th,
.jp-RenderedHTMLCommon td,
div.rendered_html tr,
div.rendered_html th,
div.rendered_html td {
text-align: right;
vertical-align: middle;
padding: 0.5em 0.5em;
line-height: normal;
white-space: normal;
max-width: none;
border: none;
}
.jp-RenderedHTMLCommon th,
div.rendered_html th {
font-weight: bold;
}
.jp-RenderedHTMLCommon tbody tr:nth-child(odd),
div.rendered_html tbody tr:nth-child(odd) {
background: #f5f5f5;
}
.jp-RenderedHTMLCommon tbody tr:hover,
div.rendered_html tbody tr:hover {
background: rgba(66, 165, 245, 0.2);
}
</style>
# Running ResNet50 on Inferentia

## Note: this tutorial runs on tensorflow-neuron 1.x only

## Introduction

In this tutorial we compile the ResNet50 model for Inferentia and then deploy it. The tutorial has two main sections:

1. Compile the ResNet50 model.
2. Run inference with the compiled model.

Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the [Tensorflow Installation Guide](../../../../frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow). You can select the kernel from the "Kernel -> Change Kernel" option at the top of this Jupyter notebook page.

Instructions for setting up the Neuron Tensorflow environment and running this tutorial as a Jupyter notebook are available in the [Tensorflow Quick Setup](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/tensorflow/tensorflow-neuron/tutorials/tensorflow-tutorial-setup.html#tensorflow-tutorial-setup).

## Compile for Neuron

A trained model must be compiled for the Inferentia target before it can be deployed on Inferentia instances. In this step we compile the Keras ResNet50 model and export it as a SavedModel, an interchange format for TensorFlow models. At the end of compilation, the compiled SavedModel is saved in the `resnet50_neuron` local directory:
```
import os
import time
import shutil
import tensorflow as tf
import tensorflow.neuron as tfn
import tensorflow.compat.v1.keras as keras
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

# Create a workspace
WORKSPACE = './ws_resnet50'
os.makedirs(WORKSPACE, exist_ok=True)

# Prepare export directory (old one removed)
model_dir = os.path.join(WORKSPACE, 'resnet50')
compiled_model_dir = os.path.join(WORKSPACE, 'resnet50_neuron')
shutil.rmtree(model_dir, ignore_errors=True)
shutil.rmtree(compiled_model_dir, ignore_errors=True)

# Instantiate Keras ResNet50 model
keras.backend.set_learning_phase(0)
keras.backend.set_image_data_format('channels_last')
model = ResNet50(weights='imagenet')

# Export SavedModel
tf.saved_model.simple_save(
    session=keras.backend.get_session(),
    export_dir=model_dir,
    inputs={'input': model.inputs[0]},
    outputs={'output': model.outputs[0]})

# Compile using Neuron
tfn.saved_model.compile(model_dir, compiled_model_dir)
```
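The call above compiles with defaults taken from the exported SavedModel. `tfn.saved_model.compile` also accepts optional keyword arguments to control compilation; the sketch below assumes the `batch_size` and `dynamic_batch_size` arguments of the tensorflow-neuron 1.x API (check the signature of your installed version before relying on them):

```
# Hedged variant of the compile call above; batch_size and dynamic_batch_size
# are assumed optional arguments of tfn.saved_model.compile in
# tensorflow-neuron 1.x.
tfn.saved_model.compile(
    model_dir,
    compiled_model_dir,
    batch_size=1,             # batch size the Neuron graph is compiled for
    dynamic_batch_size=True,  # allow inference calls with other batch sizes
)
```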
List the current directory to confirm that the `ws_resnet50` workspace was created:

```
!ls
```
## Deploy on Inferentia

We use the same instance to deploy the model. If you deploy on a different instance, launch a deployment inf1 instance and copy the compiled model to it.

Download the example image, and install the pillow module needed to load images on the deployment instance:

```
!curl -O https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg
!pip install pillow # Necessary for loading images
```
### After downloading the example image, run the inference

```
import os
import time
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications import resnet50

tf.keras.backend.set_image_data_format('channels_last')

# Create input from image
img_sgl = image.load_img('kitten_small.jpg', target_size=(224, 224))
img_arr = image.img_to_array(img_sgl)
img_arr2 = np.expand_dims(img_arr, axis=0)
img_arr3 = resnet50.preprocess_input(img_arr2)

# Load model
COMPILED_MODEL_DIR = './ws_resnet50/resnet50_neuron/'
predictor_inferentia = tf.contrib.predictor.from_saved_model(COMPILED_MODEL_DIR)

# Run inference
model_feed_dict = {'input': img_arr3}
infa_rslts = predictor_inferentia(model_feed_dict)

# Display results
print(resnet50.decode_predictions(infa_rslts["output"], top=5)[0])

# Sample output will look like below:
# [('n02123045', 'tabby', 0.68817204), ('n02127052', 'lynx', 0.12701613), ('n02123159', 'tiger_cat', 0.08736559), ('n02124075', 'Egyptian_cat', 0.063844085), ('n02128757', 'snow_leopard', 0.009240591)]
```
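The inference cell imports `time` without using it; one way to put it to work is to time repeated predictor calls. A minimal latency sketch (numbers vary with instance type; `predictor_inferentia` and `model_feed_dict` come from the cell above):

```
# Warm up once so one-time model load cost is not measured.
predictor_inferentia(model_feed_dict)

# Time sequential calls to estimate per-inference latency and throughput.
num_calls = 100
start = time.time()
for _ in range(num_calls):
    predictor_inferentia(model_feed_dict)
elapsed = time.time() - start
print('average latency: %.2f ms' % (elapsed / num_calls * 1e3))
print('throughput: %.1f images/sec' % (num_calls / elapsed))
```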
|
2023-09-29T20:54:51.993Z
|
Evaluate YOLO v3 on Inferentia — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/src/examples/tensorflow/yolo_v3_demo/yolo_v3.html
|
# Evaluate YOLO v3 on Inferentia — AWS Neuron Documentation
```
# Imports used by this excerpt (defined earlier in the full notebook).
import json
import tensorflow as tf

yolo_pred = tf.contrib.predictor.from_saved_model('./yolo_v3_coco_saved_model_neuron')

val_coco_root = './val2017'
val_annotate = './annotations/instances_val2017.json'
clsid2catid = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9, 9: 10, 10: 11, 11: 13, 12: 14, 13: 15, 14: 16,
               15: 17, 16: 18, 17: 19, 18: 20, 19: 21, 20: 22, 21: 23, 22: 24, 23: 25, 24: 27, 25: 28, 26: 31,
               27: 32, 28: 33, 29: 34, 30: 35, 31: 36, 32: 37, 33: 38, 34: 39, 35: 40, 36: 41, 37: 42, 38: 43,
               39: 44, 40: 46, 41: 47, 42: 48, 43: 49, 44: 50, 45: 51, 46: 52, 47: 53, 48: 54, 49: 55, 50: 56,
               51: 57, 52: 58, 53: 59, 54: 60, 55: 61, 56: 62, 57: 63, 58: 64, 59: 65, 60: 67, 61: 70, 62: 72,
               63: 73, 64: 74, 65: 75, 66: 76, 67: 77, 68: 78, 69: 79, 70: 80, 71: 81, 72: 82, 73: 84, 74: 85,
               75: 86, 76: 87, 77: 88, 78: 89, 79: 90}
eval_batch_size = 8
with open(val_annotate, 'r', encoding='utf-8') as f2:
    for line in f2:
        line = line.strip()
        dataset = json.loads(line)
        images = dataset['images']
box_ap = evaluate(yolo_pred, images, val_coco_root, val_annotate, eval_batch_size, clsid2catid)
```
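The snippet calls an `evaluate` helper that is defined earlier in the full notebook and not included in this excerpt. As a rough illustration only (not the notebook's actual implementation; the predictor's `'image'`, `'boxes'`, `'scores'`, and `'classes'` tensor names are assumptions), a COCO-style bbox evaluation built on pycocotools could look like this:

```
import os
import numpy as np
from PIL import Image
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

def evaluate(predictor, images, val_root, val_annotate, batch_size, clsid2catid):
    """Illustrative COCO bbox evaluation loop, not the notebook's own helper.

    batch_size is accepted to match the call site; this sketch runs one
    image at a time for simplicity.
    """
    results = []
    for image_info in images:
        path = os.path.join(val_root, image_info['file_name'])
        img = np.array(Image.open(path).convert('RGB'))
        # Input/output tensor names depend on how the SavedModel was exported;
        # the names used here are assumptions for illustration.
        outputs = predictor({'image': img[np.newaxis, ...]})
        for box, score, clsid in zip(outputs['boxes'][0],
                                     outputs['scores'][0],
                                     outputs['classes'][0]):
            x0, y0, x1, y1 = box
            results.append({
                'image_id': image_info['id'],
                'category_id': clsid2catid[int(clsid)],
                'bbox': [float(x0), float(y0), float(x1 - x0), float(y1 - y0)],
                'score': float(score),
            })
    coco_gt = COCO(val_annotate)
    coco_dt = coco_gt.loadRes(results)
    coco_eval = COCOeval(coco_gt, coco_dt, 'bbox')
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()
    return coco_eval.stats[0]  # AP at IoU=0.50:0.95
```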
|
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 current active has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 current active has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l3 current active has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l4 current active">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<div class="section" id="Evaluate-YOLO-v3-on-Inferentia">
<h1>Evaluate YOLO v3 on Inferentia<a class="headerlink" href="#Evaluate-YOLO-v3-on-Inferentia" title="Permalink to this headline">#</a></h1>
<div class="section" id="Note:-this-tutorial-runs-on-tensorflow-neuron-1.x-only">
<h2>Note: this tutorial runs on tensorflow-neuron 1.x only<a class="headerlink" href="#Note:-this-tutorial-runs-on-tensorflow-neuron-1.x-only" title="Permalink to this headline">#</a></h2>
</div>
<div class="section" id="Introduction">
<h2>Introduction<a class="headerlink" href="#Introduction" title="Permalink to this headline">#</a></h2>
<p>This tutorial walks through compiling and evaluating YOLO v3 model on Inferentia using the AWS Neuron SDK.</p>
<p>In this tutorial we provide two main sections:</p>
<ol class="arabic simple">
<li><p>Download Dataset and Generate Pretrained SavedModel</p></li>
<li><p>Compile the YOLO v3 model.</p></li>
<li><p>Deploy the same compiled model.</p></li>
</ol>
<p>Verify that this Jupyter notebook is running the Python kernel environment that was set up according to the <a class="reference external" href="../../../../frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow">Tensorflow Installation Guide</a>. You can select the Kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page.</p>
<p>Instructions of how to setup Neuron Tensorflow environment and run the tutorial as a Jupyter notebook are available in the Tutorial main page <a class="reference external" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/tensorflow/tensorflow-neuron/tutorials/yolo_v3_demo/yolo_v3_demo.html">Tensorflow-YOLO_v3 Tutorial</a></p>
</div>
<div class="section" id="Prerequisites">
<h2>Prerequisites<a class="headerlink" href="#Prerequisites" title="Permalink to this headline">#</a></h2>
<p>This demo requires the following pip packages:</p>
<p><code class="docutils literal notranslate"><span class="pre">pillow</span> <span class="pre">matplotlib</span> <span class="pre">pycocotools</span></code></p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><br><span></span><span class="kn">import</span> <span class="nn">sys</span>
<span class="o">!{</span>sys.executable<span class="o">}</span><span class="w"> </span>-m<span class="w"> </span>pip<span class="w"> </span>install<span class="w"> </span>pillow<span class="w"> </span>matplotlib<span class="w"> </span><span class="nv">pycocotools</span><span class="o">==</span><span class="m">2</span>.0.2<span class="w"> </span>--force<span class="w"> </span>--extra-index-url<span class="o">=</span>https://pip.repos.neuron.amazonaws.com
</pre></div>
</div>
</div>
</div>
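If the install step above succeeded silently, a small check along these lines (illustrative only; ``pkg_resources`` ships with setuptools) confirms which versions actually landed in the environment:

.. code:: python

   import pkg_resources

   # Report the installed version of each prerequisite package.
   for pkg in ('pillow', 'matplotlib', 'pycocotools'):
       print(pkg, pkg_resources.get_distribution(pkg).version)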
<div class="section" id="Part-1:-Download-Dataset-and-Generate-Pretrained-SavedModel">
<h2>Part 1: Download Dataset and Generate Pretrained SavedModel<a class="headerlink" href="#Part-1:-Download-Dataset-and-Generate-Pretrained-SavedModel" title="Permalink to this headline">#</a></h2>
<div class="section" id="Download-COCO-2017-validation-dataset">
<h3>Download COCO 2017 validation dataset<a class="headerlink" href="#Download-COCO-2017-validation-dataset" title="Permalink to this headline">#</a></h3>
<p>We start by downloading the COCO validation dataset, which we will use to validate our model. The COCO 2017 dataset is widely used for object-detection, segmentation and image captioning.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">!</span>curl<span class="w"> </span>-LO<span class="w"> </span>http://images.cocodataset.org/zips/val2017.zip
<span class="o">!</span>curl<span class="w"> </span>-LO<span class="w"> </span>http://images.cocodataset.org/annotations/annotations_trainval2017.zip
<span class="o">!</span>unzip<span class="w"> </span>-q<span class="w"> </span>val2017.zip
<span class="o">!</span>unzip<span class="w"> </span>annotations_trainval2017.zip
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">!</span>ls
</pre></div>
</div>
</div>
</div>
</div>
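After unzipping, a quick consistency check (a sketch, not part of the original tutorial) verifies that the annotations loaded correctly; the COCO 2017 validation split contains 5000 images:

.. code:: python

   from pycocotools.coco import COCO

   # Load the validation annotations and count the annotated images.
   coco = COCO('./annotations/instances_val2017.json')
   print(len(coco.getImgIds()), 'annotated validation images')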
<div class="section" id="Generate-YOLO-v3-tensorflow-SavedModel-(pretrained-on-COCO-2017-dataset)">
<h2>Generate YOLO v3 tensorflow SavedModel (pretrained on COCO 2017 dataset)<a class="headerlink" href="#Generate-YOLO-v3-tensorflow-SavedModel-(pretrained-on-COCO-2017-dataset)" title="Permalink to this headline">#</a></h2>
<p>Script yolo_v3_coco_saved_model.py will generate a tensorflow SavedModel using pretrained weights from <a class="reference external" href="https://github.com/YunYang1994/tensorflow-yolov3/releases/download/v1.0/yolov3_coco.tar.gz">https://github.com/YunYang1994/tensorflow-yolov3/releases/download/v1.0/yolov3_coco.tar.gz</a>.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">%</span><span class="k">run</span> yolo_v3_coco_saved_model.py ./yolo_v3_coco_saved_model
</pre></div>
</div>
</div>
<p>This tensorflow SavedModel can be loaded as a tensorflow predictor. When a JPEG format image is provided as input, the output result of the tensorflow predictor contains information for drawing bounding boxes and classification results.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">json</span>
<span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="kn">from</span> <span class="nn">PIL</span> <span class="kn">import</span> <span class="n">Image</span>
<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="nn">plt</span>
<span class="kn">import</span> <span class="nn">matplotlib.patches</span> <span class="k">as</span> <span class="nn">patches</span>
<span class="c1"># launch predictor and run inference on an arbitrary image in the validation dataset</span>
<span class="n">yolo_pred_cpu</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">predictor</span><span class="o">.</span><span class="n">from_saved_model</span><span class="p">(</span><span class="s1">'./yolo_v3_coco_saved_model'</span><span class="p">)</span>
<span class="n">image_path</span> <span class="o">=</span> <span class="s1">'./val2017/000000581781.jpg'</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">image_path</span><span class="p">,</span> <span class="s1">'rb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">feeds</span> <span class="o">=</span> <span class="p">{</span><span class="s1">'image'</span><span class="p">:</span> <span class="p">[</span><span class="n">f</span><span class="o">.</span><span class="n">read</span><span class="p">()]}</span>
<span class="n">results</span> <span class="o">=</span> <span class="n">yolo_pred_cpu</span><span class="p">(</span><span class="n">feeds</span><span class="p">)</span>
<span class="c1"># load annotations to decode classification result</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s1">'./annotations/instances_val2017.json'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">annotate_json</span> <span class="o">=</span> <span class="n">json</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">f</span><span class="p">)</span>
<span class="n">label_info</span> <span class="o">=</span> <span class="p">{</span><span class="n">idx</span><span class="o">+</span><span class="mi">1</span><span class="p">:</span> <span class="n">cat</span><span class="p">[</span><span class="s1">'name'</span><span class="p">]</span> <span class="k">for</span> <span class="n">idx</span><span class="p">,</span> <span class="n">cat</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">annotate_json</span><span class="p">[</span><span class="s1">'categories'</span><span class="p">])}</span>
<span class="c1"># draw picture and bounding boxes</span>
<span class="n">fig</span><span class="p">,</span> <span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="o">.</span><span class="n">subplots</span><span class="p">(</span><span class="n">figsize</span><span class="o">=</span><span class="p">(</span><span class="mi">10</span><span class="p">,</span> <span class="mi">10</span><span class="p">))</span>
<span class="n">ax</span><span class="o">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">Image</span><span class="o">.</span><span class="n">open</span><span class="p">(</span><span class="n">image_path</span><span class="p">)</span><span class="o">.</span><span class="n">convert</span><span class="p">(</span><span class="s1">'RGB'</span><span class="p">))</span>
<span class="n">wanted</span> <span class="o">=</span> <span class="n">results</span><span class="p">[</span><span class="s1">'scores'</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span> <span class="o">></span> <span class="mf">0.1</span>
<span class="k">for</span> <span class="n">xyxy</span><span class="p">,</span> <span class="n">label_no_bg</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">results</span><span class="p">[</span><span class="s1">'boxes'</span><span class="p">][</span><span class="mi">0</span><span class="p">][</span><span class="n">wanted</span><span class="p">],</span> <span class="n">results</span><span class="p">[</span><span class="s1">'classes'</span><span class="p">][</span><span class="mi">0</span><span class="p">][</span><span class="n">wanted</span><span class="p">]):</span>
<span class="n">xywh</span> <span class="o">=</span> <span class="n">xyxy</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">xyxy</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="n">xyxy</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="o">-</span> <span class="n">xyxy</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">xyxy</span><span class="p">[</span><span class="mi">3</span><span class="p">]</span> <span class="o">-</span> <span class="n">xyxy</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="n">rect</span> <span class="o">=</span> <span class="n">patches</span><span class="o">.</span><span class="n">Rectangle</span><span class="p">((</span><span class="n">xywh</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">xywh</span><span class="p">[</span><span class="mi">1</span><span class="p">]),</span> <span class="n">xywh</span><span class="p">[</span><span class="mi">2</span><span class="p">],</span> <span class="n">xywh</span><span class="p">[</span><span class="mi">3</span><span class="p">],</span> <span class="n">linewidth</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">edgecolor</span><span class="o">=</span><span class="s1">'g'</span><span class="p">,</span> <span class="n">facecolor</span><span class="o">=</span><span class="s1">'none'</span><span class="p">)</span>
<span class="n">ax</span><span class="o">.</span><span class="n">add_patch</span><span class="p">(</span><span class="n">rect</span><span class="p">)</span>
<span class="n">rx</span><span class="p">,</span> <span class="n">ry</span> <span class="o">=</span> <span class="n">rect</span><span class="o">.</span><span class="n">get_xy</span><span class="p">()</span>
<span class="n">rx</span> <span class="o">=</span> <span class="n">rx</span> <span class="o">+</span> <span class="n">rect</span><span class="o">.</span><span class="n">get_width</span><span class="p">()</span> <span class="o">/</span> <span class="mf">2.0</span>
<span class="n">ax</span><span class="o">.</span><span class="n">annotate</span><span class="p">(</span><span class="n">label_info</span><span class="p">[</span><span class="n">label_no_bg</span> <span class="o">+</span> <span class="mi">1</span><span class="p">],</span> <span class="p">(</span><span class="n">rx</span><span class="p">,</span> <span class="n">ry</span><span class="p">),</span> <span class="n">color</span><span class="o">=</span><span class="s1">'w'</span><span class="p">,</span> <span class="n">backgroundcolor</span><span class="o">=</span><span class="s1">'g'</span><span class="p">,</span> <span class="n">fontsize</span><span class="o">=</span><span class="mi">10</span><span class="p">,</span>
<span class="n">ha</span><span class="o">=</span><span class="s1">'center'</span><span class="p">,</span> <span class="n">va</span><span class="o">=</span><span class="s1">'center'</span><span class="p">,</span> <span class="n">bbox</span><span class="o">=</span><span class="nb">dict</span><span class="p">(</span><span class="n">boxstyle</span><span class="o">=</span><span class="s1">'square,pad=0.01'</span><span class="p">,</span> <span class="n">fc</span><span class="o">=</span><span class="s1">'g'</span><span class="p">,</span> <span class="n">ec</span><span class="o">=</span><span class="s1">'none'</span><span class="p">,</span> <span class="n">alpha</span><span class="o">=</span><span class="mf">0.5</span><span class="p">))</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
</pre></div>
</div>
</div>
</div>
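If you want to see what the predictor returns before any drawing, a quick inspection (illustrative, reusing the ``results`` dictionary from the cell above) prints each output name and its batched array shape:

.. code:: python

   # 'boxes' holds xyxy coordinates, 'scores' the confidences, and
   # 'classes' the label indices; each is batched along the first axis.
   for name, value in results.items():
       print(name, getattr(value, 'shape', type(value)))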
<div class="section" id="Part-2:-Compile-the-Pretrained-SavedModel-for-Neuron">
<h2>Part 2: Compile the Pretrained SavedModel for Neuron<a class="headerlink" href="#Part-2:-Compile-the-Pretrained-SavedModel-for-Neuron" title="Permalink to this headline">#</a></h2>
<p>We make use of the Python compilation API <code class="docutils literal notranslate"><span class="pre">tfn.saved_model.compile</span></code> that is available in <code class="docutils literal notranslate"><span class="pre">tensorflow-neuron<2</span></code>. For the purpose of reducing Neuron runtime overhead, it is necessary to make use of arguments <code class="docutils literal notranslate"><span class="pre">no_fuse_ops</span></code> and <code class="docutils literal notranslate"><span class="pre">minimum_segment_size</span></code>. Compiled model is saved in ./yolo_v3_coco_saved_model_neuron.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">shutil</span>
<span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="kn">import</span> <span class="nn">tensorflow.neuron</span> <span class="k">as</span> <span class="nn">tfn</span>
<span class="k">def</span> <span class="nf">no_fuse_condition</span><span class="p">(</span><span class="n">op</span><span class="p">):</span>
<span class="k">return</span> <span class="n">op</span><span class="o">.</span><span class="n">name</span><span class="o">.</span><span class="n">startswith</span><span class="p">(</span><span class="s1">'Preprocessor'</span><span class="p">)</span> <span class="ow">or</span> <span class="n">op</span><span class="o">.</span><span class="n">name</span><span class="o">.</span><span class="n">startswith</span><span class="p">(</span><span class="s1">'Postprocessor'</span><span class="p">)</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">Session</span><span class="p">(</span><span class="n">graph</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">Graph</span><span class="p">())</span> <span class="k">as</span> <span class="n">sess</span><span class="p">:</span>
<span class="n">tf</span><span class="o">.</span><span class="n">saved_model</span><span class="o">.</span><span class="n">loader</span><span class="o">.</span><span class="n">load</span><span class="p">(</span><span class="n">sess</span><span class="p">,</span> <span class="p">[</span><span class="s1">'serve'</span><span class="p">],</span> <span class="s1">'./yolo_v3_coco_saved_model'</span><span class="p">)</span>
<span class="n">no_fuse_ops</span> <span class="o">=</span> <span class="p">[</span><span class="n">op</span><span class="o">.</span><span class="n">name</span> <span class="k">for</span> <span class="n">op</span> <span class="ow">in</span> <span class="n">sess</span><span class="o">.</span><span class="n">graph</span><span class="o">.</span><span class="n">get_operations</span><span class="p">()</span> <span class="k">if</span> <span class="n">no_fuse_condition</span><span class="p">(</span><span class="n">op</span><span class="p">)]</span>
<span class="n">shutil</span><span class="o">.</span><span class="n">rmtree</span><span class="p">(</span><span class="s1">'./yolo_v3_coco_saved_model_neuron'</span><span class="p">,</span> <span class="n">ignore_errors</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
<span class="n">result</span> <span class="o">=</span> <span class="n">tfn</span><span class="o">.</span><span class="n">saved_model</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span>
<span class="s1">'./yolo_v3_coco_saved_model'</span><span class="p">,</span> <span class="s1">'./yolo_v3_coco_saved_model_neuron'</span><span class="p">,</span>
<span class="c1"># to enforce trivial compilable subgraphs to run on CPU</span>
<span class="n">no_fuse_ops</span><span class="o">=</span><span class="n">no_fuse_ops</span><span class="p">,</span>
<span class="n">minimum_segment_size</span><span class="o">=</span><span class="mi">100</span><span class="p">,</span>
<span class="n">batch_size</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span>
<span class="n">dynamic_batch_size</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span>
<span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">result</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
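To get a feel for how much of the graph was placed on Neuron, you can reload the compiled SavedModel and count the fused subgraphs. This is an illustrative sketch, assuming (as in ``tensorflow-neuron`` 1.x) that each compiled subgraph appears as an op of type ``NeuronOp``:

.. code:: python

   import tensorflow as tf

   # Reload the compiled model and count the fused Neuron subgraphs.
   with tf.Session(graph=tf.Graph()) as sess:
       tf.saved_model.loader.load(sess, ['serve'], './yolo_v3_coco_saved_model_neuron')
       neuron_ops = [op for op in sess.graph.get_operations() if op.type == 'NeuronOp']
   print('NeuronOp count:', len(neuron_ops))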
<div class="section" id="Deploy-the-model-on-Inferentia">
<h2>Deploy the model on Inferentia<a class="headerlink" href="#Deploy-the-model-on-Inferentia" title="Permalink to this headline">#</a></h2>
</div>
<div class="section" id="Part-3:Evaluate-Model-Quality-after-Compilation">
<h2>Part 3:Evaluate Model Quality after Compilation<a class="headerlink" href="#Part-3:Evaluate-Model-Quality-after-Compilation" title="Permalink to this headline">#</a></h2>
<div class="section" id="Define-evaluation-functions">
<h3>Define evaluation functions<a class="headerlink" href="#Define-evaluation-functions" title="Permalink to this headline">#</a></h3>
<p>We first define some handy helper functions for running evaluation on the COCO 2017 dataset.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">os</span>
<span class="kn">import</span> <span class="nn">json</span>
<span class="kn">import</span> <span class="nn">time</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="kn">from</span> <span class="nn">pycocotools.coco</span> <span class="kn">import</span> <span class="n">COCO</span>
<span class="kn">from</span> <span class="nn">pycocotools.cocoeval</span> <span class="kn">import</span> <span class="n">COCOeval</span>
<span class="k">def</span> <span class="nf">cocoapi_eval</span><span class="p">(</span><span class="n">jsonfile</span><span class="p">,</span>
<span class="n">style</span><span class="p">,</span>
<span class="n">coco_gt</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span>
<span class="n">anno_file</span><span class="o">=</span><span class="kc">None</span><span class="p">,</span>
<span class="n">max_dets</span><span class="o">=</span><span class="p">(</span><span class="mi">100</span><span class="p">,</span> <span class="mi">300</span><span class="p">,</span> <span class="mi">1000</span><span class="p">)):</span>
<span class="w"> </span><span class="sd">"""</span>
<span class="sd"> Args:</span>
<span class="sd"> jsonfile: Evaluation json file, eg: bbox.json, mask.json.</span>
<span class="sd"> style: COCOeval style, can be `bbox` , `segm` and `proposal`.</span>
<span class="sd"> coco_gt: Whether to load COCOAPI through anno_file,</span>
<span class="sd"> eg: coco_gt = COCO(anno_file)</span>
<span class="sd"> anno_file: COCO annotations file.</span>
<span class="sd"> max_dets: COCO evaluation maxDets.</span>
<span class="sd"> """</span>
<span class="k">assert</span> <span class="n">coco_gt</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span> <span class="ow">or</span> <span class="n">anno_file</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span>
<span class="k">if</span> <span class="n">coco_gt</span> <span class="ow">is</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">coco_gt</span> <span class="o">=</span> <span class="n">COCO</span><span class="p">(</span><span class="n">anno_file</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Start evaluate..."</span><span class="p">)</span>
<span class="n">coco_dt</span> <span class="o">=</span> <span class="n">coco_gt</span><span class="o">.</span><span class="n">loadRes</span><span class="p">(</span><span class="n">jsonfile</span><span class="p">)</span>
<span class="k">if</span> <span class="n">style</span> <span class="o">==</span> <span class="s1">'proposal'</span><span class="p">:</span>
<span class="n">coco_eval</span> <span class="o">=</span> <span class="n">COCOeval</span><span class="p">(</span><span class="n">coco_gt</span><span class="p">,</span> <span class="n">coco_dt</span><span class="p">,</span> <span class="s1">'bbox'</span><span class="p">)</span>
<span class="n">coco_eval</span><span class="o">.</span><span class="n">params</span><span class="o">.</span><span class="n">useCats</span> <span class="o">=</span> <span class="mi">0</span>
<span class="n">coco_eval</span><span class="o">.</span><span class="n">params</span><span class="o">.</span><span class="n">maxDets</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="n">max_dets</span><span class="p">)</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">coco_eval</span> <span class="o">=</span> <span class="n">COCOeval</span><span class="p">(</span><span class="n">coco_gt</span><span class="p">,</span> <span class="n">coco_dt</span><span class="p">,</span> <span class="n">style</span><span class="p">)</span>
<span class="n">coco_eval</span><span class="o">.</span><span class="n">evaluate</span><span class="p">()</span>
<span class="n">coco_eval</span><span class="o">.</span><span class="n">accumulate</span><span class="p">()</span>
<span class="n">coco_eval</span><span class="o">.</span><span class="n">summarize</span><span class="p">()</span>
<span class="k">return</span> <span class="n">coco_eval</span><span class="o">.</span><span class="n">stats</span>
<span class="k">def</span> <span class="nf">bbox_eval</span><span class="p">(</span><span class="n">anno_file</span><span class="p">,</span> <span class="n">bbox_list</span><span class="p">):</span>
<span class="n">coco_gt</span> <span class="o">=</span> <span class="n">COCO</span><span class="p">(</span><span class="n">anno_file</span><span class="p">)</span>
<span class="n">outfile</span> <span class="o">=</span> <span class="s1">'bbox_detections.json'</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'Generating json file...'</span><span class="p">)</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">outfile</span><span class="p">,</span> <span class="s1">'w'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">json</span><span class="o">.</span><span class="n">dump</span><span class="p">(</span><span class="n">bbox_list</span><span class="p">,</span> <span class="n">f</span><span class="p">)</span>
<span class="n">map_stats</span> <span class="o">=</span> <span class="n">cocoapi_eval</span><span class="p">(</span><span class="n">outfile</span><span class="p">,</span> <span class="s1">'bbox'</span><span class="p">,</span> <span class="n">coco_gt</span><span class="o">=</span><span class="n">coco_gt</span><span class="p">)</span>
<span class="k">return</span> <span class="n">map_stats</span>
<span class="k">def</span> <span class="nf">get_image_as_bytes</span><span class="p">(</span><span class="n">images</span><span class="p">,</span> <span class="n">eval_pre_path</span><span class="p">):</span>
<span class="n">batch_im_id_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_im_name_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_img_bytes_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">n</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">images</span><span class="p">)</span>
<span class="n">batch_im_id</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_im_name</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_img_bytes</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">i</span><span class="p">,</span> <span class="n">im</span> <span class="ow">in</span> <span class="nb">enumerate</span><span class="p">(</span><span class="n">images</span><span class="p">):</span>
<span class="n">im_id</span> <span class="o">=</span> <span class="n">im</span><span class="p">[</span><span class="s1">'id'</span><span class="p">]</span>
<span class="n">file_name</span> <span class="o">=</span> <span class="n">im</span><span class="p">[</span><span class="s1">'file_name'</span><span class="p">]</span>
<span class="k">if</span> <span class="n">i</span> <span class="o">%</span> <span class="n">eval_batch_size</span> <span class="o">==</span> <span class="mi">0</span> <span class="ow">and</span> <span class="n">i</span> <span class="o">!=</span> <span class="mi">0</span><span class="p">:</span>
<span class="n">batch_im_id_list</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">batch_im_id</span><span class="p">)</span>
<span class="n">batch_im_name_list</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">batch_im_name</span><span class="p">)</span>
<span class="n">batch_img_bytes_list</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">batch_img_bytes</span><span class="p">)</span>
<span class="n">batch_im_id</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_im_name</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_img_bytes</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">batch_im_id</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">im_id</span><span class="p">)</span>
<span class="n">batch_im_name</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">file_name</span><span class="p">)</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">eval_pre_path</span><span class="p">,</span> <span class="n">file_name</span><span class="p">),</span> <span class="s1">'rb'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">batch_img_bytes</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">f</span><span class="o">.</span><span class="n">read</span><span class="p">())</span>
<span class="k">return</span> <span class="n">batch_im_id_list</span><span class="p">,</span> <span class="n">batch_im_name_list</span><span class="p">,</span> <span class="n">batch_img_bytes_list</span>
<span class="k">def</span> <span class="nf">analyze_bbox</span><span class="p">(</span><span class="n">results</span><span class="p">,</span> <span class="n">batch_im_id</span><span class="p">,</span> <span class="n">_clsid2catid</span><span class="p">):</span>
<span class="n">bbox_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">k</span> <span class="o">=</span> <span class="mi">0</span>
<span class="k">for</span> <span class="n">boxes</span><span class="p">,</span> <span class="n">scores</span><span class="p">,</span> <span class="n">classes</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">results</span><span class="p">[</span><span class="s1">'boxes'</span><span class="p">],</span> <span class="n">results</span><span class="p">[</span><span class="s1">'scores'</span><span class="p">],</span> <span class="n">results</span><span class="p">[</span><span class="s1">'classes'</span><span class="p">]):</span>
<span class="k">if</span> <span class="n">boxes</span> <span class="ow">is</span> <span class="ow">not</span> <span class="kc">None</span><span class="p">:</span>
<span class="n">im_id</span> <span class="o">=</span> <span class="n">batch_im_id</span><span class="p">[</span><span class="n">k</span><span class="p">]</span>
<span class="n">n</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">boxes</span><span class="p">)</span>
<span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">n</span><span class="p">):</span>
<span class="n">clsid</span> <span class="o">=</span> <span class="n">classes</span><span class="p">[</span><span class="n">p</span><span class="p">]</span>
<span class="n">score</span> <span class="o">=</span> <span class="n">scores</span><span class="p">[</span><span class="n">p</span><span class="p">]</span>
<span class="n">xmin</span><span class="p">,</span> <span class="n">ymin</span><span class="p">,</span> <span class="n">xmax</span><span class="p">,</span> <span class="n">ymax</span> <span class="o">=</span> <span class="n">boxes</span><span class="p">[</span><span class="n">p</span><span class="p">]</span>
<span class="n">catid</span> <span class="o">=</span> <span class="p">(</span><span class="n">_clsid2catid</span><span class="p">[</span><span class="nb">int</span><span class="p">(</span><span class="n">clsid</span><span class="p">)])</span>
<span class="n">w</span> <span class="o">=</span> <span class="n">xmax</span> <span class="o">-</span> <span class="n">xmin</span> <span class="o">+</span> <span class="mi">1</span>
<span class="n">h</span> <span class="o">=</span> <span class="n">ymax</span> <span class="o">-</span> <span class="n">ymin</span> <span class="o">+</span> <span class="mi">1</span>
<span class="n">bbox</span> <span class="o">=</span> <span class="p">[</span><span class="n">xmin</span><span class="p">,</span> <span class="n">ymin</span><span class="p">,</span> <span class="n">w</span><span class="p">,</span> <span class="n">h</span><span class="p">]</span>
<span class="c1"># Round to the nearest 10th to avoid huge file sizes, as COCO suggests</span>
<span class="n">bbox</span> <span class="o">=</span> <span class="p">[</span><span class="nb">round</span><span class="p">(</span><span class="nb">float</span><span class="p">(</span><span class="n">x</span><span class="p">)</span> <span class="o">*</span> <span class="mi">10</span><span class="p">)</span> <span class="o">/</span> <span class="mi">10</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">bbox</span><span class="p">]</span>
<span class="n">bbox_res</span> <span class="o">=</span> <span class="p">{</span>
<span class="s1">'image_id'</span><span class="p">:</span> <span class="n">im_id</span><span class="p">,</span>
<span class="s1">'category_id'</span><span class="p">:</span> <span class="n">catid</span><span class="p">,</span>
<span class="s1">'bbox'</span><span class="p">:</span> <span class="n">bbox</span><span class="p">,</span>
<span class="s1">'score'</span><span class="p">:</span> <span class="nb">float</span><span class="p">(</span><span class="n">score</span><span class="p">),</span>
<span class="p">}</span>
<span class="n">bbox_list</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">bbox_res</span><span class="p">)</span>
<span class="n">k</span> <span class="o">+=</span> <span class="mi">1</span>
<span class="k">return</span> <span class="n">bbox_list</span>
</pre></div>
</div>
</div>
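For reference, the `cocoapi_eval` helper called above is defined earlier in this notebook. A minimal sketch of what such a helper typically does with `pycocotools` (a hedged illustration assuming the standard `COCOeval` flow; `cocoapi_eval_sketch` is a hypothetical name):

```
# Sketch of a pycocotools-based bbox evaluation, as performed by cocoapi_eval.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

def cocoapi_eval_sketch(jsonfile, style, coco_gt):
    coco_dt = coco_gt.loadRes(jsonfile)  # load the detections written above
    coco_eval = COCOeval(coco_gt, coco_dt, style)
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()  # prints the 12 standard COCO metrics
    return coco_eval.stats
```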
Here is the actual evaluation loop. To fully utilize all four NeuronCores on one Inferentia, the optimal setup is to run multi-threaded inference using a `ThreadPoolExecutor`. The following cell is a multi-threaded adaptation of the evaluation routine at [https://github.com/miemie2013/Keras-YOLOv4/blob/910c4c6f7265f5828fceed0f784496a0b46516bf/tools/cocotools.py#L97](https://github.com/miemie2013/Keras-YOLOv4/blob/910c4c6f7265f5828fceed0f784496a0b46516bf/tools/cocotools.py#L97).
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">concurrent</span> <span class="kn">import</span> <span class="n">futures</span>
<span class="k">def</span> <span class="nf">evaluate</span><span class="p">(</span><span class="n">yolo_predictor</span><span class="p">,</span> <span class="n">images</span><span class="p">,</span> <span class="n">eval_pre_path</span><span class="p">,</span> <span class="n">anno_file</span><span class="p">,</span> <span class="n">eval_batch_size</span><span class="p">,</span> <span class="n">_clsid2catid</span><span class="p">):</span>
<span class="n">batch_im_id_list</span><span class="p">,</span> <span class="n">batch_im_name_list</span><span class="p">,</span> <span class="n">batch_img_bytes_list</span> <span class="o">=</span> <span class="n">get_image_as_bytes</span><span class="p">(</span><span class="n">images</span><span class="p">,</span> <span class="n">eval_pre_path</span><span class="p">)</span>
<span class="c1"># warm up</span>
<span class="n">yolo_predictor</span><span class="p">({</span><span class="s1">'image'</span><span class="p">:</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">batch_img_bytes_list</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="nb">object</span><span class="p">)})</span>
<span class="k">with</span> <span class="n">futures</span><span class="o">.</span><span class="n">ThreadPoolExecutor</span><span class="p">(</span><span class="mi">4</span><span class="p">)</span> <span class="k">as</span> <span class="n">exe</span><span class="p">:</span>
<span class="n">fut_im_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">fut_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">start_time</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span>
<span class="k">for</span> <span class="n">batch_im_id</span><span class="p">,</span> <span class="n">batch_im_name</span><span class="p">,</span> <span class="n">batch_img_bytes</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">batch_im_id_list</span><span class="p">,</span> <span class="n">batch_im_name_list</span><span class="p">,</span> <span class="n">batch_img_bytes_list</span><span class="p">):</span>
<span class="k">if</span> <span class="nb">len</span><span class="p">(</span><span class="n">batch_img_bytes</span><span class="p">)</span> <span class="o">!=</span> <span class="n">eval_batch_size</span><span class="p">:</span>
<span class="k">continue</span>
<span class="n">fut</span> <span class="o">=</span> <span class="n">exe</span><span class="o">.</span><span class="n">submit</span><span class="p">(</span><span class="n">yolo_predictor</span><span class="p">,</span> <span class="p">{</span><span class="s1">'image'</span><span class="p">:</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">(</span><span class="n">batch_img_bytes</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="nb">object</span><span class="p">)})</span>
<span class="n">fut_im_list</span><span class="o">.</span><span class="n">append</span><span class="p">((</span><span class="n">batch_im_id</span><span class="p">,</span> <span class="n">batch_im_name</span><span class="p">))</span>
<span class="n">fut_list</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">fut</span><span class="p">)</span>
<span class="n">bbox_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="n">count</span> <span class="o">=</span> <span class="mi">0</span>
<span class="k">for</span> <span class="p">(</span><span class="n">batch_im_id</span><span class="p">,</span> <span class="n">batch_im_name</span><span class="p">),</span> <span class="n">fut</span> <span class="ow">in</span> <span class="nb">zip</span><span class="p">(</span><span class="n">fut_im_list</span><span class="p">,</span> <span class="n">fut_list</span><span class="p">):</span>
<span class="n">results</span> <span class="o">=</span> <span class="n">fut</span><span class="o">.</span><span class="n">result</span><span class="p">()</span>
<span class="n">bbox_list</span><span class="o">.</span><span class="n">extend</span><span class="p">(</span><span class="n">analyze_bbox</span><span class="p">(</span><span class="n">results</span><span class="p">,</span> <span class="n">batch_im_id</span><span class="p">,</span> <span class="n">_clsid2catid</span><span class="p">))</span>
<span class="k">for</span> <span class="n">_</span> <span class="ow">in</span> <span class="n">batch_im_id</span><span class="p">:</span>
<span class="n">count</span> <span class="o">+=</span> <span class="mi">1</span>
<span class="k">if</span> <span class="n">count</span> <span class="o">%</span> <span class="mi">100</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'Test iter </span><span class="si">{}</span><span class="s1">'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">count</span><span class="p">))</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'==================== Performance Measurement ===================='</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'Finished inference on </span><span class="si">{}</span><span class="s1"> images in </span><span class="si">{}</span><span class="s1"> seconds'</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">images</span><span class="p">),</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span> <span class="o">-</span> <span class="n">start_time</span><span class="p">))</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'================================================================='</span><span class="p">)</span>
<span class="c1"># start evaluation</span>
<span class="n">box_ap_stats</span> <span class="o">=</span> <span class="n">bbox_eval</span><span class="p">(</span><span class="n">anno_file</span><span class="p">,</span> <span class="n">bbox_list</span><span class="p">)</span>
<span class="k">return</span> <span class="n">box_ap_stats</span>
</pre></div>
</div>
</div>
</div>
<div class="section" id="Evaluate-mean-average-precision-(mAP)-score">
<h3>Evaluate mean average precision (mAP) score<a class="headerlink" href="#Evaluate-mean-average-precision-(mAP)-score" title="Permalink to this headline">#</a></h3>
<p>Here is the code to calculate mAP scores of the YOLO v3 model. The expected mAP score is around 0.328 if we use the pretrained weights.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">yolo_pred</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">predictor</span><span class="o">.</span><span class="n">from_saved_model</span><span class="p">(</span><span class="s1">'./yolo_v3_coco_saved_model_neuron'</span><span class="p">)</span>
<span class="n">val_coco_root</span> <span class="o">=</span> <span class="s1">'./val2017'</span>
<span class="n">val_annotate</span> <span class="o">=</span> <span class="s1">'./annotations/instances_val2017.json'</span>
<span class="n">clsid2catid</span> <span class="o">=</span> <span class="p">{</span><span class="mi">0</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="p">:</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">2</span><span class="p">:</span> <span class="mi">3</span><span class="p">,</span> <span class="mi">3</span><span class="p">:</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">4</span><span class="p">:</span> <span class="mi">5</span><span class="p">,</span> <span class="mi">5</span><span class="p">:</span> <span class="mi">6</span><span class="p">,</span> <span class="mi">6</span><span class="p">:</span> <span class="mi">7</span><span class="p">,</span> <span class="mi">7</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">8</span><span class="p">:</span> <span class="mi">9</span><span class="p">,</span> <span class="mi">9</span><span class="p">:</span> <span class="mi">10</span><span class="p">,</span> <span class="mi">10</span><span class="p">:</span> <span class="mi">11</span><span class="p">,</span> <span class="mi">11</span><span class="p">:</span> <span class="mi">13</span><span class="p">,</span> <span class="mi">12</span><span class="p">:</span> <span class="mi">14</span><span class="p">,</span> <span class="mi">13</span><span class="p">:</span> <span class="mi">15</span><span class="p">,</span> <span class="mi">14</span><span class="p">:</span> <span class="mi">16</span><span class="p">,</span>
<span class="mi">15</span><span class="p">:</span> <span class="mi">17</span><span class="p">,</span> <span class="mi">16</span><span class="p">:</span> <span class="mi">18</span><span class="p">,</span> <span class="mi">17</span><span class="p">:</span> <span class="mi">19</span><span class="p">,</span> <span class="mi">18</span><span class="p">:</span> <span class="mi">20</span><span class="p">,</span> <span class="mi">19</span><span class="p">:</span> <span class="mi">21</span><span class="p">,</span> <span class="mi">20</span><span class="p">:</span> <span class="mi">22</span><span class="p">,</span> <span class="mi">21</span><span class="p">:</span> <span class="mi">23</span><span class="p">,</span> <span class="mi">22</span><span class="p">:</span> <span class="mi">24</span><span class="p">,</span> <span class="mi">23</span><span class="p">:</span> <span class="mi">25</span><span class="p">,</span> <span class="mi">24</span><span class="p">:</span> <span class="mi">27</span><span class="p">,</span> <span class="mi">25</span><span class="p">:</span> <span class="mi">28</span><span class="p">,</span> <span class="mi">26</span><span class="p">:</span> <span class="mi">31</span><span class="p">,</span>
<span class="mi">27</span><span class="p">:</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">28</span><span class="p">:</span> <span class="mi">33</span><span class="p">,</span> <span class="mi">29</span><span class="p">:</span> <span class="mi">34</span><span class="p">,</span> <span class="mi">30</span><span class="p">:</span> <span class="mi">35</span><span class="p">,</span> <span class="mi">31</span><span class="p">:</span> <span class="mi">36</span><span class="p">,</span> <span class="mi">32</span><span class="p">:</span> <span class="mi">37</span><span class="p">,</span> <span class="mi">33</span><span class="p">:</span> <span class="mi">38</span><span class="p">,</span> <span class="mi">34</span><span class="p">:</span> <span class="mi">39</span><span class="p">,</span> <span class="mi">35</span><span class="p">:</span> <span class="mi">40</span><span class="p">,</span> <span class="mi">36</span><span class="p">:</span> <span class="mi">41</span><span class="p">,</span> <span class="mi">37</span><span class="p">:</span> <span class="mi">42</span><span class="p">,</span> <span class="mi">38</span><span class="p">:</span> <span class="mi">43</span><span class="p">,</span>
<span class="mi">39</span><span class="p">:</span> <span class="mi">44</span><span class="p">,</span> <span class="mi">40</span><span class="p">:</span> <span class="mi">46</span><span class="p">,</span> <span class="mi">41</span><span class="p">:</span> <span class="mi">47</span><span class="p">,</span> <span class="mi">42</span><span class="p">:</span> <span class="mi">48</span><span class="p">,</span> <span class="mi">43</span><span class="p">:</span> <span class="mi">49</span><span class="p">,</span> <span class="mi">44</span><span class="p">:</span> <span class="mi">50</span><span class="p">,</span> <span class="mi">45</span><span class="p">:</span> <span class="mi">51</span><span class="p">,</span> <span class="mi">46</span><span class="p">:</span> <span class="mi">52</span><span class="p">,</span> <span class="mi">47</span><span class="p">:</span> <span class="mi">53</span><span class="p">,</span> <span class="mi">48</span><span class="p">:</span> <span class="mi">54</span><span class="p">,</span> <span class="mi">49</span><span class="p">:</span> <span class="mi">55</span><span class="p">,</span> <span class="mi">50</span><span class="p">:</span> <span class="mi">56</span><span class="p">,</span>
<span class="mi">51</span><span class="p">:</span> <span class="mi">57</span><span class="p">,</span> <span class="mi">52</span><span class="p">:</span> <span class="mi">58</span><span class="p">,</span> <span class="mi">53</span><span class="p">:</span> <span class="mi">59</span><span class="p">,</span> <span class="mi">54</span><span class="p">:</span> <span class="mi">60</span><span class="p">,</span> <span class="mi">55</span><span class="p">:</span> <span class="mi">61</span><span class="p">,</span> <span class="mi">56</span><span class="p">:</span> <span class="mi">62</span><span class="p">,</span> <span class="mi">57</span><span class="p">:</span> <span class="mi">63</span><span class="p">,</span> <span class="mi">58</span><span class="p">:</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">59</span><span class="p">:</span> <span class="mi">65</span><span class="p">,</span> <span class="mi">60</span><span class="p">:</span> <span class="mi">67</span><span class="p">,</span> <span class="mi">61</span><span class="p">:</span> <span class="mi">70</span><span class="p">,</span> <span class="mi">62</span><span class="p">:</span> <span class="mi">72</span><span class="p">,</span>
<span class="mi">63</span><span class="p">:</span> <span class="mi">73</span><span class="p">,</span> <span class="mi">64</span><span class="p">:</span> <span class="mi">74</span><span class="p">,</span> <span class="mi">65</span><span class="p">:</span> <span class="mi">75</span><span class="p">,</span> <span class="mi">66</span><span class="p">:</span> <span class="mi">76</span><span class="p">,</span> <span class="mi">67</span><span class="p">:</span> <span class="mi">77</span><span class="p">,</span> <span class="mi">68</span><span class="p">:</span> <span class="mi">78</span><span class="p">,</span> <span class="mi">69</span><span class="p">:</span> <span class="mi">79</span><span class="p">,</span> <span class="mi">70</span><span class="p">:</span> <span class="mi">80</span><span class="p">,</span> <span class="mi">71</span><span class="p">:</span> <span class="mi">81</span><span class="p">,</span> <span class="mi">72</span><span class="p">:</span> <span class="mi">82</span><span class="p">,</span> <span class="mi">73</span><span class="p">:</span> <span class="mi">84</span><span class="p">,</span> <span class="mi">74</span><span class="p">:</span> <span class="mi">85</span><span class="p">,</span>
<span class="mi">75</span><span class="p">:</span> <span class="mi">86</span><span class="p">,</span> <span class="mi">76</span><span class="p">:</span> <span class="mi">87</span><span class="p">,</span> <span class="mi">77</span><span class="p">:</span> <span class="mi">88</span><span class="p">,</span> <span class="mi">78</span><span class="p">:</span> <span class="mi">89</span><span class="p">,</span> <span class="mi">79</span><span class="p">:</span> <span class="mi">90</span><span class="p">}</span>
<span class="n">eval_batch_size</span> <span class="o">=</span> <span class="mi">8</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">val_annotate</span><span class="p">,</span> <span class="s1">'r'</span><span class="p">,</span> <span class="n">encoding</span><span class="o">=</span><span class="s1">'utf-8'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f2</span><span class="p">:</span>
<span class="k">for</span> <span class="n">line</span> <span class="ow">in</span> <span class="n">f2</span><span class="p">:</span>
<span class="n">line</span> <span class="o">=</span> <span class="n">line</span><span class="o">.</span><span class="n">strip</span><span class="p">()</span>
<span class="n">dataset</span> <span class="o">=</span> <span class="n">json</span><span class="o">.</span><span class="n">loads</span><span class="p">(</span><span class="n">line</span><span class="p">)</span>
<span class="n">images</span> <span class="o">=</span> <span class="n">dataset</span><span class="p">[</span><span class="s1">'images'</span><span class="p">]</span>
<span class="n">box_ap</span> <span class="o">=</span> <span class="n">evaluate</span><span class="p">(</span><span class="n">yolo_pred</span><span class="p">,</span> <span class="n">images</span><span class="p">,</span> <span class="n">val_coco_root</span><span class="p">,</span> <span class="n">val_annotate</span><span class="p">,</span> <span class="n">eval_batch_size</span><span class="p">,</span> <span class="n">clsid2catid</span><span class="p">)</span>
</pre></div>
</div>
</div>
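`box_ap` now holds the returned mAP statistics. As a hedged sketch of reading the headline number (this assumes, as is typical for COCO tooling, that the helper returned `COCOeval.stats`, whose first entry is mAP @ IoU=0.50:0.95):

```
# Assumes box_ap is COCOeval.stats, a 12-element array of COCO metrics.
print('mAP @ IoU=0.50:0.95 = {:.3f}'.format(box_ap[0]))  # expect ~0.328
```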
|
2023-09-29T20:54:52.153Z
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.rst.txt
|
```
.. _tensorflow-ref-neuron-analyze_model-api:

TensorFlow 2.x (``tensorflow-neuron``) analyze_model API
========================================================

Method
------

``tensorflow.neuron.analyze_model``

Description
-----------

Analyzes a ``keras.Model`` or a Python callable that can be decorated by
``tf.function`` for its compatibility with Neuron. It displays supported
vs. unsupported operators in the model, as well as percentages and counts of
each operator, and returns a dictionary with operator statistics.

Arguments
---------

- **func:** The ``keras.Model`` or function to be analyzed.
- **example_inputs:** A ``tf.Tensor`` or a tuple/list/dict of
  ``tf.Tensor`` objects for tracing the function. When ``example_inputs``
  is a ``tf.Tensor`` or a list of ``tf.Tensor`` objects, we expect
  ``func`` to have calling signature ``func(example_inputs)``. Otherwise,
  the expectation is that inference on ``func`` is done by calling
  ``func(*example_inputs)`` when ``example_inputs`` is a ``tuple``,
  or ``func(**example_inputs)`` when ``example_inputs`` is a ``dict``.
  The case where ``func`` accepts mixed positional and keyword arguments
  is currently unsupported.

Returns
-------

- A results ``dict`` with these keys: ``'percent_supported', 'supported_count',
  'total_count', 'supported_operators', 'unsupported_operators', 'operators',
  'operator_count'``.

Example Usage
-------------

.. code:: python

    import tensorflow as tf
    import tensorflow.neuron as tfn

    input0 = tf.keras.layers.Input(3)
    dense0 = tf.keras.layers.Dense(3)(input0)
    model = tf.keras.Model(inputs=[input0], outputs=[dense0])
    example_inputs = tf.random.uniform([1, 3])
    results = tfn.analyze_model(model, example_inputs)
    print(results)

    # expected output
    '''
    BiasAdd
    MatMul
    100.00% of all operations (2 of 2) are supported
    {'percent_supported': 100.0, 'supported_count': 2, 'total_count': 2,
     'supported_operators': {'BiasAdd', 'MatMul'}, 'unsupported_operators': [],
     'operators': ['BiasAdd', 'MatMul'], 'operator_count': {'MatMul': 1, 'BiasAdd': 1}}
    '''
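
The ``dict`` form of ``example_inputs`` dispatches keyword arguments. The
following additional sketch (not part of the original example) illustrates
``func(**example_inputs)`` tracing as described under Arguments:

.. code:: python

    import tensorflow as tf
    import tensorflow.neuron as tfn

    @tf.function
    def func(x, y):
        return tf.matmul(x, y)

    # dict inputs are passed as keyword arguments, i.e. func(x=..., y=...)
    example_inputs = {
        'x': tf.random.uniform([2, 3]),
        'y': tf.random.uniform([3, 2]),
    }
    results = tfn.analyze_model(func, example_inputs)
    print(results['percent_supported'])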
```
|
2023-09-29T20:54:52.263Z
|
|
Running SSD300 with AWS Neuron — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/tensorflow/tensorflow-neuron/tutorials/ssd300_demo/ssd300_demo.html#tensorflow-ssd300
|
# Running SSD300 with AWS Neuron — AWS Neuron Documentation
_This document is relevant for_: `Inf1`
## Running SSD300 with AWS Neuron
_Update 11/16: The model checkpoint link [https://api.ngc.nvidia.com/v2/models/nvidia/ssdpyt\_fp32/versions/1/files/nvidia\_ssdpyt\_fp32\_20190225.pt](https://api.ngc.nvidia.com/v2/models/nvidia/ssdpyt_fp32/versions/1/files/nvidia_ssdpyt_fp32_20190225.pt) is currently broken; the AWS Neuron team is working on providing an alternative source._
This demo shows a Neuron compatible SSD300 implementation that is functionally equivalent to the open source SSD300 model. It uses TensorFlow-Neuron and the open source PyTorch SSD300 model and checkpoint ([https://pytorch.org/hub/nvidia\_deeplearningexamples\_ssd/](https://pytorch.org/hub/nvidia_deeplearningexamples_ssd/)), and also shows the performance achieved on an Inf1 instance.
## Table of Contents
1. Launch EC2 instance and update AWS Neuron SDK software
2. Generating Neuron compatible SSD300 TensorFlow SavedModel
- Convert open source PyTorch SSD300 model and checkpoint into Neuron compatible SSD300 TensorFlow SavedModel
3. Evaluate the generated SSD300 TensorFlow SavedModel for both accuracy and performance
- Running threaded inference through the COCO 2017 validation dataset
## Launch EC2 instances and update tensorflow-neuron and neuron-cc
For this demo, launch one inf1.xlarge EC2 instance. We recommend using the latest Ubuntu 18 Deep Learning AMI (DLAMI).
Please configure your ubuntu16/ubuntu18/yum repo following the steps in the [Install TensorFlow Neuron](../../setup/tensorflow-install.html#install-neuron-tensorflow) guide in order to install `tensorflow-model-server-neuron`.
## Generating Neuron compatible SSD300 TensorFlow SavedModel
First, connect to your inf1.xlarge instance.
### Compile open source PyTorch SSD300 model and checkpoint into Neuron compatible SSD300 TensorFlow SavedModel
In the same `ssd300_demo` directory, run the following:
1. Create venv and install dependencies
```
sudo apt update
sudo apt install g++ python3-dev python3-venv unzip
sudo apt install tensorflow-model-server-neuron
python3 -m venv env
source ./env/bin/activate
pip install pip setuptools --upgrade
pip install -r ./requirements.txt --extra-index-url=https://pip.repos.neuron.amazonaws.com
```
2. Clone NVIDIA’s DeepLearningExamples repo that contains PyTorch SSD300.
```
git clone https://github.com/NVIDIA/DeepLearningExamples.git
cd DeepLearningExamples
git checkout a644350589f9abc91b203f73e686a50f5d6f3e96
cd ..
```
3. Download the PyTorch SSD300 checkpoint file.
```
curl -LO https://api.ngc.nvidia.com/v2/models/nvidia/ssdpyt_fp32/versions/1/files/nvidia_ssdpyt_fp32_20190225.pt
```
4. Download the COCO 2017 validation set and annotations.
```
curl -LO http://images.cocodataset.org/zips/val2017.zip
unzip ./val2017.zip
curl -LO http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip ./annotations_trainval2017.zip
```
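As an optional sanity check (a sketch; the expected count is a property of the COCO 2017 validation set itself, not of this demo), confirm the download is complete:
```
# The COCO 2017 validation set contains 5,000 images.
import os
print(len(os.listdir('./val2017')))  # expect 5000
```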
5. Convert the PyTorch SSD300 model and checkpoint into a Neuron-compatible TensorFlow SavedModel.
```
python ssd300_model.py --torch_checkpoint=./nvidia_ssdpyt_fp32_20190225.pt --output_saved_model=./ssd300_tf_neuron/1
```
This converts the PyTorch SSD300 model and checkpoint to a Neuron-compatible TensorFlow SavedModel using tensorflow-neuron and neuron-cc. The compilation output is stored in `./ssd300_tf_neuron`.
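Optionally, you can sanity-check the converted SavedModel by loading its serving signature. This is a sketch using TensorFlow 1.x's `tf.contrib.predictor` API; the tensor names printed are whatever the conversion produced:
```
# Sketch: inspect the converted SavedModel's input/output tensors.
import tensorflow as tf

predictor = tf.contrib.predictor.from_saved_model('./ssd300_tf_neuron/1')
print(predictor.feed_tensors)   # input tensor(s) the model expects
print(predictor.fetch_tensors)  # output tensor(s) the model produces
```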
6. Launch the `tensorflow-model-server-neuron` gRPC server on the default port 8500 in the background.
```
tensorflow_model_server_neuron --model_base_path=$(pwd)/ssd300_tf_neuron &
```
7. From the client, evaluate the Neuron-compatible TensorFlow SavedModel for both accuracy and performance. Note that this client by default assumes a `tensorflow-model-server-neuron` instance listening at `localhost:8500`. On inf1.xlarge, the expected throughput is 100 images/second once the server is fully warmed up, and the expected mean average precision (mAP) is 0.253. A minimal single-request sketch is shown after the command below.
```
python ssd300_evaluation_client.py --val2017=./val2017 --instances_val2017_json=./annotations/instances_val2017.json
```
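For illustration, here is a minimal single-request sketch against the server launched above. It is not the bundled `ssd300_evaluation_client.py`, which remains the authoritative client; it assumes the `tensorflow-serving-api` Python package is installed, that the server uses its default model name `default`, and that the SavedModel's input tensor is keyed `image` (check the signature for the real key):
```
# Hypothetical single-request gRPC client sketch.
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Any image from the COCO validation set downloaded earlier.
with open('./val2017/000000000139.jpg', 'rb') as f:
    image_bytes = f.read()

request = predict_pb2.PredictRequest()
request.model_spec.name = 'default'  # tensorflow_model_server's default model name
request.inputs['image'].CopyFrom(tf.make_tensor_proto([image_bytes]))  # 'image' key is an assumption
result = stub.Predict(request, 60.0)  # 60-second timeout
print(list(result.outputs.keys()))
```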
8. After running the demo, clean up the resources allocated in the Neuron runtime by gracefully terminating the `tensorflow_model_server_neuron` process, e.g.:
```
killall tensorflow_model_server_neuron
```
_This document is relevant for_: `Inf1`
|
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 current active has-children">
<a class="reference internal" href="../../../index.html">
TensorFlow Neuron
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l2">
<a class="reference internal" href="../../../tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 current active has-children">
<a class="reference internal" href="../../../tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l3 current active has-children">
<a class="reference internal" href="../tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l4 current active">
<a class="reference internal" href="../tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fframeworks/tensorflow/tensorflow-neuron/tutorials/ssd300_demo/ssd300_demo.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/frameworks/tensorflow/tensorflow-neuron/tutorials/ssd300_demo/ssd300_demo.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../../../_sources/frameworks/tensorflow/tensorflow-neuron/tutorials/ssd300_demo/ssd300_demo.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.rst</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#table-of-contents">
Table of Contents
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#launch-ec2-instances-and-update-tensorflow-neuron-and-neuron-cc">
Launch EC2 instances and update tensorflow-neuron and neuron-cc
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#generating-neuron-compatible-ssd300-tensorflow-savedmodel">
Generating Neuron compatible SSD300 TensorFlow SavedModel
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#compile-open-source-pytorch-ssd300-model-and-checkpoint-into-neuron-compatible-ssd300-tensorflow-savedmodel">
Compile open source PyTorch SSD300 model and checkpoint into Neuron compatible SSD300 TensorFlow SavedModel
</a>
</li>
</ul>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>Running SSD300 with AWS Neuron</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
<div>
<h2> Contents </h2>
</div>
<nav aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#table-of-contents">
Table of Contents
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#launch-ec2-instances-and-update-tensorflow-neuron-and-neuron-cc">
Launch EC2 instances and update tensorflow-neuron and neuron-cc
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#generating-neuron-compatible-ssd300-tensorflow-savedmodel">
Generating Neuron compatible SSD300 TensorFlow SavedModel
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#compile-open-source-pytorch-ssd300-model-and-checkpoint-into-neuron-compatible-ssd300-tensorflow-savedmodel">
Compile open source PyTorch SSD300 model and checkpoint into Neuron compatible SSD300 TensorFlow SavedModel
</a>
</li>
</ul>
</li>
</ul>
</nav>
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
<div class="section" id="running-ssd300-with-aws-neuron">
<span id="tensorflow-ssd300"></span><h1>Running SSD300 with AWS Neuron<a class="headerlink" href="#running-ssd300-with-aws-neuron" title="Permalink to this headline">#</a></h1>
<p><em>Update 11/16: The model checkpoint
link</em><a class="reference external" href="https://api.ngc.nvidia.com/v2/models/nvidia/ssdpyt_fp32/versions/1/files/nvidia_ssdpyt_fp32_20190225.pt">https://api.ngc.nvidia.com/v2/models/nvidia/ssdpyt_fp32/versions/1/files/nvidia_ssdpyt_fp32_20190225.pt</a><em>is
currently broken and the AWS Neuron team is working on providing an
alternative source.</em></p>
<p>This demo shows a Neuron compatible SSD300 implementation that is
functionally equivalent to open source SSD300 model. This demo uses
TensorFlow-Neuron, PyTorch SSD300 model and checkpoint
(<a class="reference external" href="https://pytorch.org/hub/nvidia_deeplearningexamples_ssd/">https://pytorch.org/hub/nvidia_deeplearningexamples_ssd/</a>) and also
shows the performance achieved by the Inf1 instance.</p>
<div class="section" id="table-of-contents">
<h2>Table of Contents<a class="headerlink" href="#table-of-contents" title="Permalink to this headline">#</a></h2>
<ol class="arabic simple">
<li><p>Launch EC2 instance and update AWS Neuron SDK software</p></li>
<li><p>Generating Neuron compatible SSD300 TensorFlow SavedModel</p>
<ul class="simple">
<li><p>Convert open source PyTorch SSD300 model and checkpoint into
Neuron compatible SSD300 TensorFlow SavedModel</p></li>
</ul>
</li>
<li><p>Evaluate the generated SSD300 TensorFlow SavedModel for both accuracy
and performance</p>
<ul class="simple">
<li><p>Running threaded inference through the COCO 2017 validation
dataset</p></li>
</ul>
</li>
</ol>
</div>
<div class="section" id="launch-ec2-instances-and-update-tensorflow-neuron-and-neuron-cc">
<h2>Launch EC2 instances and update tensorflow-neuron and neuron-cc<a class="headerlink" href="#launch-ec2-instances-and-update-tensorflow-neuron-and-neuron-cc" title="Permalink to this headline">#</a></h2>
<p>For this demo, launch one inf1.xlarge EC2 instance. We recommend using
the latest Ubuntu 18 Deep Learning AMI (DLAMI).</p>
<p>Please configure your ubuntu16/ubuntu18/yum repo following the steps in
the <a class="reference internal" href="../../setup/tensorflow-install.html#install-neuron-tensorflow"><span class="std std-ref">Install TensorFlow Neuron</span></a> in order to install
<code class="docutils literal notranslate"><span class="pre">tensorflow-model-server-neuron</span></code>.</p>
</div>
<div class="section" id="generating-neuron-compatible-ssd300-tensorflow-savedmodel">
<h2>Generating Neuron compatible SSD300 TensorFlow SavedModel<a class="headerlink" href="#generating-neuron-compatible-ssd300-tensorflow-savedmodel" title="Permalink to this headline">#</a></h2>
<p>First connect to your inf1.xlarge instance</p>
<div class="section" id="compile-open-source-pytorch-ssd300-model-and-checkpoint-into-neuron-compatible-ssd300-tensorflow-savedmodel">
<h3>Compile open source PyTorch SSD300 model and checkpoint into Neuron compatible SSD300 TensorFlow SavedModel<a class="headerlink" href="#compile-open-source-pytorch-ssd300-model-and-checkpoint-into-neuron-compatible-ssd300-tensorflow-savedmodel" title="Permalink to this headline">#</a></h3>
<p>In the same directory ssd300_demo, run the following:</p>
<ol class="arabic simple">
<li><p>Create venv and install dependencies</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>sudo<span class="w"> </span>apt<span class="w"> </span>update
sudo<span class="w"> </span>apt<span class="w"> </span>install<span class="w"> </span>g++<span class="w"> </span>python3-dev<span class="w"> </span>python3-venv<span class="w"> </span>unzip
sudo<span class="w"> </span>apt<span class="w"> </span>install<span class="w"> </span>tensorflow-model-server-neuron
python3<span class="w"> </span>-m<span class="w"> </span>venv<span class="w"> </span>env
<span class="nb">source</span><span class="w"> </span>./env/bin/activate
pip<span class="w"> </span>install<span class="w"> </span>pip<span class="w"> </span>setuptools<span class="w"> </span>--upgrade
pip<span class="w"> </span>install<span class="w"> </span>-r<span class="w"> </span>./requirements.txt<span class="w"> </span>--extra-index-url<span class="o">=</span>https://pip.repos.neuron.amazonaws.com
</pre></div>
</div>
<ol class="arabic simple" start="2">
<li><p>Clone NVIDIA’s DeepLearningExamples repo that contains PyTorch
SSD300.</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>git<span class="w"> </span>clone<span class="w"> </span>https://github.com/NVIDIA/DeepLearningExamples.git
<span class="nb">cd</span><span class="w"> </span>DeepLearningExamples
git<span class="w"> </span>checkout<span class="w"> </span>a644350589f9abc91b203f73e686a50f5d6f3e96
<span class="nb">cd</span><span class="w"> </span>..
</pre></div>
</div>
<ol class="arabic simple" start="3">
<li><p>Download PyTorch SSD300 checkpoint file.</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>curl<span class="w"> </span>-LO<span class="w"> </span>https://api.ngc.nvidia.com/v2/models/nvidia/ssdpyt_fp32/versions/1/files/nvidia_ssdpyt_fp32_20190225.pt
</pre></div>
</div>
<ol class="arabic simple" start="4">
<li><p>Download COCO 2017 validation set and annotations.</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>curl<span class="w"> </span>-LO<span class="w"> </span>http://images.cocodataset.org/zips/val2017.zip
unzip<span class="w"> </span>./val2017.zip
curl<span class="w"> </span>-LO<span class="w"> </span>http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip<span class="w"> </span>./annotations_trainval2017.zip
</pre></div>
</div>
<ol class="arabic simple" start="5">
<li><p>Convert PyTorch SSD300 model and checkpoint into a Neuron-compatible
TensorFlow SavedModel.</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>python<span class="w"> </span>ssd300_model.py<span class="w"> </span>--torch_checkpoint<span class="o">=</span>./nvidia_ssdpyt_fp32_20190225.pt<span class="w"> </span>--output_saved_model<span class="o">=</span>./ssd300_tf_neuron/1
</pre></div>
</div>
<p>This converts PyTorch SSD300 model and checkpoint to a Neuron-compatible
TensorFlow SavedModel using tensorflow-neuron and neuron-cc. The
compilation output is stored in <code class="docutils literal notranslate"><span class="pre">./ssd300_tf_neuron</span></code>.</p>
<ol class="arabic simple" start="6">
<li><p>Launch the <code class="docutils literal notranslate"><span class="pre">tensorflow-model-server-neuron</span></code> gRPC server at default
port 8500 in the background.</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>tensorflow_model_server_neuron<span class="w"> </span>--model_base_path<span class="o">=</span><span class="k">$(</span><span class="nb">pwd</span><span class="k">)</span>/ssd300_tf_neuron<span class="w"> </span><span class="p">&</span>
</pre></div>
</div>
<ol class="arabic simple" start="7">
<li><p>In client, evaluate the Neuron-compatible TensorFlow SavedModel for
both accuracy and performance. Note that this client by default
assumes a <code class="docutils literal notranslate"><span class="pre">tensorflow-model-server-neuron</span></code> listening at
<code class="docutils literal notranslate"><span class="pre">localhost:8500</span></code>. On inf1.xlarge, the expected throughput is 100
images/second once the server is fully warmed up, and the expected
mean average precision (mAP) is 0.253.</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>python<span class="w"> </span>ssd300_evaluation_client.py<span class="w"> </span>--val2017<span class="o">=</span>./val2017<span class="w"> </span>--instances_val2017_json<span class="o">=</span>./annotations/instances_val2017.json
</pre></div>
</div>
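For orientation, the request pattern that the evaluation client sends to the server looks roughly like the following hand-rolled sketch. The model name `default` matches the server's default when `--model_name` is not passed; the input tensor name `image` is an assumption, so check the actual signature with `saved_model_cli show --dir ./ssd300_tf_neuron/1 --all`.
```
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Connect to the tensorflow-model-server-neuron launched in step 6.
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'default'  # server default when --model_name is unset
image = np.zeros([1, 300, 300, 3], dtype=np.float32)  # SSD300 takes 300x300 inputs
request.inputs['image'].CopyFrom(tf.make_tensor_proto(image))  # input name is an assumption

result = stub.Predict(request, 60.0)  # 60-second timeout
print(list(result.outputs.keys()))
```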
<ol class="arabic simple" start="8">
<li><p>After running the demo, please cleanup resources allocated in Neuron
runtime by gracefully killing the <code class="docutils literal notranslate"><span class="pre">tensorflow_model_server_neuron</span></code>
process, e. g.,</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>killall<span class="w"> </span>tensorflow_model_server_neuron
</pre></div>
</div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
</div>
</div>
</div>
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
<a class="left-prev" id="prev-link" href="../../../../../src/examples/tensorflow/yolo_v3_demo/yolo_v3.html" title="previous page">
<i class="fas fa-angle-left"></i>
<div class="prev-next-info">
<p class="prev-next-subtitle">previous</p>
<p class="prev-next-title">Evaluate YOLO v3 on Inferentia</p>
</div>
</a>
<a class="right-next" id="next-link" href="../../../../../src/examples/tensorflow/keras_resnet50/keras_resnet50.html" title="next page">
<div class="prev-next-info">
<p class="prev-next-subtitle">next</p>
<p class="prev-next-title">Tensorflow ResNet 50 Optimization Tutorial</p>
</div>
<i class="fas fa-angle-right"></i>
</a>
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html>
|
2023-09-29T20:54:52.385Z
|
Running TensorFlow BERT-Large with AWS Neuron — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/tensorflow/tensorflow-neuron/tutorials/bert_demo/bert_demo.html#tensorflow-bert-demo
|
# Running TensorFlow BERT-Large with AWS Neuron — AWS Neuron Documentation
_This document is relevant for_: `Inf1`
## Running TensorFlow BERT-Large with AWS Neuron[#](#running-tensorflow-bert-large-with-aws-neuron "Permalink to this headline")
This example shows a Neuron compatible BERT-Large implementation that is functionally equivalent to the open source BERT-Large model. This demo uses TensorFlow-Neuron and BERT-Large weights fine-tuned for MRPC, and also shows the performance achieved by the Inf1 instance. For users who want to use public BERT SavedModels, please also follow the steps described in [Using public BERT SavedModels](#using-public-bert-savedmodels).
## Launch EC2 instances[#](#launch-ec2-instances "Permalink to this headline")
For this demo, launch two EC2 instances:
- a c5.4xlarge instance for compiling the BERT-Large model, and
- an inf1.xlarge instance for running inference.
For both of these instances, choose the latest Ubuntu 18 Deep Learning AMI (DLAMI).
## Compiling Neuron compatible BERT-Large[#](#compiling-neuron-compatible-bert-large "Permalink to this headline")
First connect to a c5.4xlarge instance and update tensorflow-neuron and neuron-cc.
### Update compilation EC2 instance[#](#update-compilation-ec2-instance "Permalink to this headline")
Update to the latest neuron software by executing the instructions at [Install TensorFlow Neuron](../../setup/tensorflow-install.html#install-neuron-tensorflow).
Note: if your tensorflow-neuron version on the inference instance is lower than 1.15.0.1.0.1333.0, you will need to run this demo on inf1.2xlarge instead of inf1.xlarge.
### Compile open source BERT-Large saved model using Neuron compatible BERT-Large implementation[#](#compile-open-source-bert-large-saved-model-using-neuron-compatible-bert-large-implementation "Permalink to this headline")
Neuron software works with TensorFlow saved models. Users should bring their own BERT-Large saved model for this section. This demo will run inference for the MRPC task and the saved model should be fine tuned for MRPC. Users who need additional help to fine-tune the model for MRPC or to create a saved model can refer to [Appendix 1](#bert-tensorflow-demo-appendix1).
In the same environment, and in the same directory as the bert\_demo scripts, run the following:
```
git clone https://github.com/aws/aws-neuron-sdk
cd ~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
export BERT_LARGE_SAVED_MODEL="/path/to/user/bert-large/savedmodel"
python bert_model.py --input_saved_model $BERT_LARGE_SAVED_MODEL --output_saved_model ./bert-saved-model-neuron --batch_size=6 --aggressive_optimizations
```
This compiles the BERT-Large model pointed to by $BERT\_LARGE\_SAVED\_MODEL for an input size of 128 and a batch size of 6. The compilation output is stored in bert-saved-model-neuron. Copy this directory to your Inf1 instance for inference.
The bert\_model.py script encapsulates all the steps necessary for this process. For details on what is done by bert\_model.py please refer to [Appendix 2](#bert-tensorflow-demo-appendix2).
## Running the inference demo[#](#running-the-inference-demo "Permalink to this headline")
Connect to your inf1.xlarge instance and update tensorflow-neuron, aws-neuron-runtime and aws-neuron-tools.
### Update inference EC2 instance[#](#update-inference-ec2-instance "Permalink to this headline")
Update to the latest neuron software by executing the instructions at [Install TensorFlow Neuron](../../setup/tensorflow-install.html#install-neuron-tensorflow).
### Launching the BERT-Large demo server[#](#launching-the-bert-large-demo-server "Permalink to this headline")
Copy the compiled model (bert-saved-model-neuron) from your c5.4xlarge to your inf1.xlarge instance. Place the model in the same directory as the bert\_demo scripts. Then, from the same conda environment, launch the BERT-Large demo server:
```
cd ~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
python bert_server.py --dir bert-saved-model-neuron --batch 6 --parallel 4
```
This loads 4 BERT-Large models, one into each of the 4 NeuronCores found in an inf1.xlarge instance. For each of the 4 models, the BERT-Large demo server opportunistically stitches together asynchronous requests into batches of 6. When there are insufficient pending requests, the server creates dummy requests for batching; a minimal sketch of this scheme is shown below.
Wait for the bert\_server to finish loading the BERT-Large models into Inferentia memory. When it is ready to accept requests, it will print the inferences per second once every second. This reflects the number of real inferences only; dummy requests created for batching are not credited to Inferentia performance. Once the inferences are done, you can send a keyboard interrupt to print out the average throughput of your run.
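The batching logic lives in bert\_server.py; purely for illustration, here is a minimal sketch of the idea, under the assumption that each pending request is a dict of NumPy feature arrays (the names here are hypothetical, not the script's actual API):
```
import queue
import numpy as np

BATCH_SIZE = 6
request_queue = queue.Queue()  # filled by the serving front end

def gather_batch():
    """Collect up to BATCH_SIZE pending requests, padding with dummies."""
    batch = [request_queue.get()]  # block until at least one real request
    while len(batch) < BATCH_SIZE:
        try:
            batch.append(request_queue.get_nowait())  # opportunistic fill
        except queue.Empty:
            break
    n_real = len(batch)
    # Pad with zero-filled dummies so the fixed batch-6 compiled model can run;
    # only the first n_real results are returned to clients or counted.
    dummy = {name: np.zeros_like(arr) for name, arr in batch[0].items()}
    batch.extend([dummy] * (BATCH_SIZE - n_real))
    return batch, n_real
```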
### Sending requests to server from multiple clients[#](#sending-requests-to-server-from-multiple-clients "Permalink to this headline")
Wait until the BERT demo server is ready to accept requests. Then, on the same inf1.xlarge instance, launch a separate Linux terminal. From the bert\_demo directory, execute the following commands:
```
source activate aws_neuron_tensorflow_p36
cd ~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
for i in {1..96}; do python bert_client.py --cycle 128 & done
```
This spins up 96 clients, each of which sends 128 inference requests.
### Printing latency metrics[#](#printing-latency-metrics "Permalink to this headline")
After all your requests have been sent to the server, you can run the following command:
```
python latency_printer.py
```
## Using public BERT SavedModels[#](#using-public-bert-savedmodels "Permalink to this headline")
We are now providing a compilation script that has better compatibility with various flavors of BERT SavedModels generated from [https://github.com/google-research/bert](https://github.com/google-research/bert). Here are the current limitations:
1. You did not change [modeling.py](https://github.com/google-research/bert/blob/master/modeling.py)
2. BERT SavedModel is generated using `estimator.export_saved_model`
3. BERT SavedModel uses fixed sequence length 128 (you may check by `saved_model_cli show --dir /path/to/user/bert/savedmodel --all`)
4. `neuron-cc` version is at least 1.0.12000.0
5. `aws-neuron-runtime` version is at least 1.0.7000.0
6. The `--batch_size` argument specified in this script is at most 4
Example usage is shown below:
```
export BERT_LARGE_SAVED_MODEL="/path/to/user/bert-large/savedmodel"
cd ~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
python bert_no_model.py --input_saved_model $BERT_LARGE_SAVED_MODEL --output_saved_model ./bert-saved-model-neuron --batch_size=1
```
## Appendix 1[#](#appendix-1 "Permalink to this headline")
Users who need help fine-tuning BERT-Large for MRPC and creating a saved model may follow the instructions here.
Connect to the c5.4xlarge compilation EC2 instance you started above and download these three items:
1. Clone [this](https://github.com/google-research/bert) GitHub repo.
2. Download the GLUE data as described [here](https://github.com/google-research/bert#user-content-sentence-and-sentence-pair-classification-tasks). Do not run the fine-tuning command.
3. Download a desired pre-trained BERT-Large checkpoint from [here](https://github.com/google-research/bert#user-content-pre-trained-models). This is the model we will fine-tune.
Next edit run\_classifier.py in the cloned bert repo to apply the patch described in the following git diff.
```
diff --git a/run_classifier.py b/run_classifier.py
index 817b147..c9426bc 100644
--- a/run_classifier.py
+++ b/run_classifier.py
@@ -955,6 +955,18 @@ def main(_):
drop_remainder=predict_drop_remainder)
result = estimator.predict(input_fn=predict_input_fn)
+ features = {
+ "input_ids": tf.placeholder(shape=[None, FLAGS.max_seq_length], dtype=tf.int32, name='input_ids'),
+ "input_mask": tf.placeholder(shape=[None, FLAGS.max_seq_length], dtype=tf.int32, name='input_mask'),
+ "segment_ids": tf.placeholder(shape=[None, FLAGS.max_seq_length], dtype=tf.int32, name='segment_ids'),
+ "label_ids": tf.placeholder(shape=[None], dtype=tf.int32, name='label_ids'),
+ "is_real_example": tf.placeholder(shape=[None], dtype=tf.int32, name='is_real_example'),
+ }
+ serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(features)
+ estimator._export_to_tpu = False ## !!important to add this
+ estimator.export_saved_model(
+ export_dir_base='./bert_classifier_saved_model',
+ serving_input_receiver_fn=serving_input_fn)
output_predict_file = os.path.join(FLAGS.output_dir, "test_results.tsv")
with tf.gfile.GFile(output_predict_file, "w") as writer:
```
NOTE : Users who are interested may refer to this [link](https://github.com/google-research/bert/issues/146#issuecomment-569138476) for additional background information on the patch but it is not necessary for running this demo.
Then, from the bert\_demo directory, run the following:
```
source activate aws_neuron_tensorflow_p36
cd ~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
export BERT_REPO_DIR="/path/to/cloned/bert/repo/directory"
export GLUE_DIR="/path/to/glue/data/directory"
export BERT_BASE_DIR="/path/to/pre-trained/bert-large/checkpoint/directory"
./tune_save.sh
```
A saved model will be created in $BERT\_REPO\_DIR/bert-saved-model/_random\_number_/, where _random\_number_ is a random number generated for every run. Use this saved model to continue with the rest of the demo.
## Appendix 2[#](#appendix-2 "Permalink to this headline")
For all BERT variants, we currently need to augment the standard Neuron compilation process for performance tuning. In the future, we intend to automate this tuning process. This would allow users to use the standard Neuron compilation process, which requires only a one-line change in user source code (a sketch of that one-line flow follows the list below). The standard compilation process is described in [Running Neuron Apache MXNet (Incubating) ResNet50 on Inferentia](../../../../../src/examples/mxnet/resnet50/resnet50.html).
The augmented Neuron compilation process is encapsulated by the bert\_model.py script, which performs the following steps:
1. Define a Neuron compatible implementation of BERT-Large. For inference, this is functionally equivalent to the open source BERT-Large. The changes needed to create a Neuron compatible BERT-Large implementation are described in [Appendix 3](#bert-tensorflow-demo-appendix3).
2. Extract the BERT-Large weights from the open source saved model pointed to by `--input_saved_model` and associate them with the Neuron compatible model.
3. Invoke TensorFlow-Neuron to compile the Neuron compatible model for Inferentia using the newly associated weights.
4. Finally, save the compiled model into the location given by `--output_saved_model`.
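For reference, the standard one-line flow mentioned above is the tensorflow-neuron SavedModel compile call. A minimal sketch follows (the directory paths are assumptions, and bert\_model.py augments this call with the tuning steps listed above):
```
import tensorflow.neuron as tfn

# Standard compilation: read a SavedModel, compile it for Inferentia, and
# write the compiled SavedModel to a new directory.
tfn.saved_model.compile(
    './bert-large-savedmodel',      # input SavedModel directory (assumed path)
    './bert-saved-model-neuron',    # output directory for the compiled model
)
```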
## Appendix 3[#](#appendix-3 "Permalink to this headline")
The Neuron compatible implementation of BERT-Large is functionally equivalent to the open source version when used for inference. However, the detailed implementation does differ, and here is the list of changes:
1. Data Type Casting: If the original BERT-Large is an FP32 model, bert\_model.py contains manually defined cast operators to enable mixed precision. FP16 is used for the multi-head attention and fully-connected layers, and FP32 everywhere else (see the sketch after this list). This will be automated in a future release.
2. Remove Unused Operators: A model typically contains training operators that are not used in inference, including a subset of the reshape operators. Those operators do not affect inference functionality and have been removed.
3. Reimplementation of Selected Operators: A number of operators (mainly mask operators) have been reimplemented to bypass a known compiler issue. This will be fixed in a planned future release.
4. Manually Partition Embedding Ops to CPU: The embedding portion of BERT-Large has been partitioned manually into a subgraph that is executed on the host CPU, without noticeable performance impact. In the near future, we plan to implement this through compiler auto-partitioning, without the need for user intervention.
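The casting pattern in item 1 amounts to wrapping the compute-heavy ops in FP16 while keeping FP32 at the boundaries. A minimal TF 1.x-style sketch (illustrative only; the function and tensor names are hypothetical, not taken from bert\_model.py):
```
import tensorflow as tf

def dense_fp16(x, weight, bias):
    """Fully-connected layer computed in FP16, with FP32 inputs and outputs."""
    x16 = tf.cast(x, tf.float16)
    w16 = tf.cast(weight, tf.float16)
    y16 = tf.matmul(x16, w16) + tf.cast(bias, tf.float16)
    return tf.cast(y16, tf.float32)  # everything downstream stays FP32
```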
_This document is relevant for_: `Inf1`
|
<!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Running TensorFlow BERT-Large with AWS Neuron — AWS Neuron Documentation</title>
<!-- Loaded before other Sphinx assets -->
<link href="../../../../../_static/styles/theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link href="../../../../../_static/styles/pydata-sphinx-theme.css?digest=1999514e3f237ded88cf" rel="stylesheet">
<link rel="stylesheet" href="../../../../../_static/vendor/fontawesome/5.13.0/css/all.min.css">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-solid-900.woff2">
<link rel="preload" as="font" type="font/woff2" crossorigin="" href="../../../../../_static/vendor/fontawesome/5.13.0/webfonts/fa-brands-400.woff2">
<link rel="stylesheet" type="text/css" href="../../../../../_static/pygments.css">
<link rel="stylesheet" href="../../../../../_static/styles/sphinx-book-theme.css?digest=5115cc725059bd94278eecd172e13a965bf8f5a9" type="text/css">
<link rel="stylesheet" type="text/css" href="../../../../../_static/css/custom.css">
<link rel="stylesheet" type="text/css" href="../../../../../_static/styles/sphinx-book-theme.css">
<link rel="stylesheet" type="text/css" href="../../../../../_static/contentui.css">
<link rel="stylesheet" type="text/css" href="../../../../../_static/design-style.4045f2051d55cab465a707391d5b2007.min.css">
<link rel="stylesheet" type="text/css" href="/_/static/css/badge_only.css">
<!-- Pre-loaded scripts that we'll load fully later -->
<link rel="preload" as="script" href="../../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf">
<script type="text/javascript" async="" src="https://www.googletagmanager.com/gtag/js?id=G-2Q13EGB80H&l=dataLayer&cx=c"></script><script type="text/javascript" async="" src="https://www.google-analytics.com/analytics.js"></script><script data-url_root="../../../../../" id="documentation_options" src="../../../../../_static/documentation_options.js"></script>
<script src="../../../../../_static/jquery.js"></script>
<script src="../../../../../_static/underscore.js"></script>
<script src="../../../../../_static/doctools.js"></script>
<script src="../../../../../_static/scripts/sphinx-book-theme.js?digest=9c920249402e914e316237a7dbc6769907cce411"></script>
<script src="../../../../../_static/contentui.js"></script>
<script src="../../../../../_static/design-tabs.js"></script>
<script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script>
<script async="async" src="/_/static/javascript/readthedocs-doc-embed.js"></script>
<link rel="index" title="Index" href="../../../../../genindex.html">
<link rel="search" title="Search" href="../../../../../search.html">
<link rel="next" title="Running Huggingface DistilBERT with TensorFlow-Neuron" href="../../../../../src/examples/tensorflow/huggingface_bert/huggingface_bert.html">
<link rel="prev" title="Natural Language Processing (NLP) Tutorials (tensorflow-neuron)" href="../tutorials-tensorflow-nlp.html">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="docsearch:language" content="en">
<!-- Google Analytics -->
<style type="text/css">
ul.ablog-archive {
list-style: none;
overflow: auto;
margin-left: 0px;
}
ul.ablog-archive li {
float: left;
margin-right: 5px;
font-size: 80%;
}
ul.postlist a {
font-style: italic;
}
ul.postlist-style-disc {
list-style-type: disc;
}
ul.postlist-style-none {
list-style-type: none;
}
ul.postlist-style-circle {
list-style-type: circle;
}
</style>
<!-- RTD Extra Head -->
<link rel="stylesheet" href="/_/static/css/readthedocs-doc-embed.css" type="text/css">
<script type="application/json" id="READTHEDOCS_DATA">{"ad_free": false, "api_host": "https://readthedocs.com", "builder": "sphinx", "canonical_url": null, "docroot": "/", "features": {"docsearch_disabled": false}, "global_analytics_code": "UA-17997319-2", "language": "en", "page": "frameworks/tensorflow/tensorflow-neuron/tutorials/bert_demo/bert_demo", "programming_language": "py", "project": "awsdocs-neuron", "proxied_api_host": "/_", "source_suffix": ".rst", "subprojects": {}, "theme": "sphinx_book_theme", "user_analytics_code": "G-2Q13EGB80H", "version": "v2.14.1"}</script>
<!--
Using this variable directly instead of using `JSON.parse` is deprecated.
The READTHEDOCS_DATA global variable will be removed in the future.
-->
<script type="text/javascript">
READTHEDOCS_DATA = JSON.parse(document.getElementById('READTHEDOCS_DATA').innerHTML);
</script>
<script type="text/javascript" src="/_/static/javascript/readthedocs-analytics.js" async="async"></script>
<!-- end RTD <extrahead> -->
<script src="https://www.googletagmanager.com/gtag/js?id=UA-17997319-2" type="text/javascript" async=""></script><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60" class="scrolled">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="current nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 current active has-children">
<a class="reference internal" href="../../../index.html">
TensorFlow Neuron
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l2">
<a class="reference internal" href="../../../tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 current active has-children">
<a class="reference internal" href="../../../tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l3 current active has-children">
<a class="reference internal" href="../tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l4">
<a class="reference internal" href="../tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4 current active">
<a class="reference internal" href="../tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fframeworks/tensorflow/tensorflow-neuron/tutorials/bert_demo/bert_demo.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/frameworks/tensorflow/tensorflow-neuron/tutorials/bert_demo/bert_demo.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../../../_sources/frameworks/tensorflow/tensorflow-neuron/tutorials/bert_demo/bert_demo.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.rst</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#launch-ec2-instances">
Launch EC2 instances
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#compiling-neuron-compatible-bert-large">
Compiling Neuron compatible BERT-Large
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#update-compilation-ec2-instance">
Update compilation EC2 instance
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#compile-open-source-bert-large-saved-model-using-neuron-compatible-bert-large-implementation">
Compile open source BERT-Large saved model using Neuron compatible BERT-Large implementation
</a>
</li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#running-the-inference-demo">
Running the inference demo
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#update-inference-ec2-instance">
Update inference EC2 instance
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#launching-the-bert-large-demo-server">
Launching the BERT-Large demo server
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#sending-requests-to-server-from-multiple-clients">
Sending requests to server from multiple clients
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#printing-latency-metrics">
Printing latency metrics
</a>
</li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#using-public-bert-savedmodels">
Using public BERT SavedModels
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#appendix-1">
Appendix 1
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#appendix-2">
Appendix 2
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#appendix-3">
Appendix 3
</a>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>Running TensorFlow BERT-Large with AWS Neuron</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
<div>
<h2> Contents </h2>
</div>
<nav aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#launch-ec2-instances">
Launch EC2 instances
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#compiling-neuron-compatible-bert-large">
Compiling Neuron compatible BERT-Large
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#update-compilation-ec2-instance">
Update compilation EC2 instance
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#compile-open-source-bert-large-saved-model-using-neuron-compatible-bert-large-implementation">
Compile open source BERT-Large saved model using Neuron compatible BERT-Large implementation
</a>
</li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#running-the-inference-demo">
Running the inference demo
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#update-inference-ec2-instance">
Update inference EC2 instance
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#launching-the-bert-large-demo-server">
Launching the BERT-Large demo server
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#sending-requests-to-server-from-multiple-clients">
Sending requests to server from multiple clients
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#printing-latency-metrics">
Printing latency metrics
</a>
</li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#using-public-bert-savedmodels">
Using public BERT SavedModels
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#appendix-1">
Appendix 1
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#appendix-2">
Appendix 2
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#appendix-3">
Appendix 3
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
_This document is relevant for_: `Inf1`

# Running TensorFlow BERT-Large with AWS Neuron

This example shows a Neuron compatible BERT-Large implementation that is functionally equivalent to the open source BERT-Large model. The demo uses TensorFlow-Neuron and BERT-Large weights fine-tuned for MRPC, and also shows the performance achieved by the Inf1 instance. Users who want to use public BERT SavedModels should also follow the steps described in [Using public BERT SavedModels](#using-public-bert-savedmodels).
<div class="section" id="launch-ec2-instances">
<h2>Launch EC2 instances<a class="headerlink" href="#launch-ec2-instances" title="Permalink to this headline">#</a></h2>
<p>For this demo, launch two EC2 instances :</p>
<ul class="simple">
<li><p>a c5.4xlarge instance for compiling the BERT-Large Model and</p></li>
<li><p>an inf1.xlarge instance for running inference</p></li>
</ul>
<p>For both of these instances choose the latest Ubuntu 18 Deep Learning
AMI (DLAMI).</p>
</div>
<div class="section" id="compiling-neuron-compatible-bert-large">
<span id="id1"></span><h2>Compiling Neuron compatible BERT-Large<a class="headerlink" href="#compiling-neuron-compatible-bert-large" title="Permalink to this headline">#</a></h2>
<p>First connect to a c5.4xlarge instance and update tensorflow-neuron and
neuron-cc</p>
<div class="section" id="update-compilation-ec2-instance">
<h3>Update compilation EC2 instance<a class="headerlink" href="#update-compilation-ec2-instance" title="Permalink to this headline">#</a></h3>
<p>Update to the latest neuron software by executing the instructions at <a class="reference internal" href="../../setup/tensorflow-install.html#install-neuron-tensorflow"><span class="std std-ref">Install TensorFlow Neuron</span></a>.</p>
<p>Note: if your tensorflow-neuron version on the inference instance is
lower than 1.15.0.1.0.1333.0, you will need to run this demo on
inf1.2xlarge instead of inf1.xlarge.</p>
</div>
<div class="section" id="compile-open-source-bert-large-saved-model-using-neuron-compatible-bert-large-implementation">
<h3>Compile open source BERT-Large saved model using Neuron compatible BERT-Large implementation<a class="headerlink" href="#compile-open-source-bert-large-saved-model-using-neuron-compatible-bert-large-implementation" title="Permalink to this headline">#</a></h3>
<p>Neuron software works with TensorFlow saved models. Users should bring
their own BERT-Large saved model for this section. This demo will run
inference for the MRPC task and the saved model should be fine tuned for
MRPC. Users who need additional help to fine-tune the model for MRPC or
to create a saved model can refer to <a class="reference internal" href="#bert-tensorflow-demo-appendix1"><span class="std std-ref">Appendix 1</span></a>.</p>
<p>In the same environment and directory bert_demo scripts, run the
following :</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>git<span class="w"> </span>clone<span class="w"> </span>https://github.com/aws/aws-neuron-sdk
<span class="nb">cd</span><span class="w"> </span>~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
<span class="nb">export</span><span class="w"> </span><span class="nv">BERT_LARGE_SAVED_MODEL</span><span class="o">=</span><span class="s2">"/path/to/user/bert-large/savedmodel"</span>
python<span class="w"> </span>bert_model.py<span class="w"> </span>--input_saved_model<span class="w"> </span><span class="nv">$BERT_LARGE_SAVED_MODEL</span><span class="w"> </span>--output_saved_model<span class="w"> </span>./bert-saved-model-neuron<span class="w"> </span>--batch_size<span class="o">=</span><span class="m">6</span><span class="w"> </span>--aggressive_optimizations
</pre></div>
</div>
<p>This compiles BERT-Large pointed to by $BERT_LARGE_SAVED_MODEL for an
input size of 128 and batch size of 6. The compilation output is stored
in bert-saved-model-neuron. Copy this to your Inf1 instance for
inferencing.</p>
<p>The bert_model.py script encapsulates all the steps necessary for this
process. For details on what is done by bert_model.py please refer to
<a class="reference internal" href="#bert-tensorflow-demo-appendix2"><span class="std std-ref">Appendix 2</span></a>.</p>
</div>
</div>
<div class="section" id="running-the-inference-demo">
<h2>Running the inference demo<a class="headerlink" href="#running-the-inference-demo" title="Permalink to this headline">#</a></h2>
<p>Connect to your inf1.xlarge instance and update tensorflow-neuron,
aws-neuron-runtime and aws-neuron-tools.</p>
<div class="section" id="update-inference-ec2-instance">
<h3>Update inference EC2 instance<a class="headerlink" href="#update-inference-ec2-instance" title="Permalink to this headline">#</a></h3>
<p>Update to the latest neuron software by executing the instructions at <a class="reference internal" href="../../setup/tensorflow-install.html#install-neuron-tensorflow"><span class="std std-ref">Install TensorFlow Neuron</span></a>.</p>
</div>
<div class="section" id="launching-the-bert-large-demo-server">
<h3>Launching the BERT-Large demo server<a class="headerlink" href="#launching-the-bert-large-demo-server" title="Permalink to this headline">#</a></h3>
<p>Copy the compiled model (bert-saved-model-neuron) from your c5.4xlarge
to your inf1.xlarge instance. Place the model in the same directory as
the bert_demo scripts. Then from the same conda environment launch the
BERT-Large demo server :</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="nb">cd</span><span class="w"> </span>~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
python<span class="w"> </span>bert_server.py<span class="w"> </span>--dir<span class="w"> </span>bert-saved-model-neuron<span class="w"> </span>--batch<span class="w"> </span><span class="m">6</span><span class="w"> </span>--parallel<span class="w"> </span><span class="m">4</span>
</pre></div>
</div>
<p>This loads 4 BERT-Large models, one into each of the 4 NeuronCores found
in an inf1.xlarge instance. For each of the 4 models, the BERT-Large
demo server opportunistically stitches together asynchronous requests
into batch 6 requests. When there are insufficient pending requests, the
server creates dummy requests for batching.</p>
<p>Wait for the bert_server to finish loading the BERT-Large models to
Inferentia memory. When it is ready to accept requests it will print the
inferences per second once every second. This reflects the number of
real inferences only. Dummy requests created for batching are not
credited to inferentia performance. Once the inferences are done you can send
a keyboard interrupt to print out the average throughput of your run.</p>
</div>
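The actual batching logic lives in bert_server.py; the following is only a minimal, hypothetical sketch of the dummy-padding idea described above (the function and field names are illustrative, not taken from the demo scripts):

```
import numpy as np

BATCH_SIZE = 6   # matches --batch 6 above
SEQ_LEN = 128    # matches the compiled input size

def pad_to_batch(pending_requests):
    """Stitch pending requests into one fixed-size batch.

    Each request is assumed to be a dict of int32 arrays shaped
    [1, SEQ_LEN]. Missing slots are filled with all-zero dummy rows
    so the compiled batch-6 model always sees a full batch.
    """
    n_real = len(pending_requests)
    batch = {}
    for name in ("input_ids", "input_mask", "segment_ids"):
        rows = [req[name] for req in pending_requests]
        # pad with dummy rows up to the fixed batch size
        rows += [np.zeros((1, SEQ_LEN), dtype=np.int32)] * (BATCH_SIZE - n_real)
        batch[name] = np.concatenate(rows, axis=0)
    # n_real tells the caller which outputs are real vs. dummy,
    # so dummy results can be dropped and not counted as throughput
    return batch, n_real
```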
<div class="section" id="sending-requests-to-server-from-multiple-clients">
<h3>Sending requests to server from multiple clients<a class="headerlink" href="#sending-requests-to-server-from-multiple-clients" title="Permalink to this headline">#</a></h3>
<p>Wait until the bert demo server is ready to accept requests. Then on the
same inf1.xlarge instance, launch a separate linux terminal. From the
bert_demo directory execute the following commands :</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="nb">source</span><span class="w"> </span>activate<span class="w"> </span>aws_neuron_tensorflow_p36
<span class="nb">cd</span><span class="w"> </span>~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
<span class="k">for</span><span class="w"> </span>i<span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="o">{</span><span class="m">1</span>..96<span class="o">}</span><span class="p">;</span><span class="w"> </span><span class="k">do</span><span class="w"> </span>python<span class="w"> </span>bert_client.py<span class="w"> </span>--cycle<span class="w"> </span><span class="m">128</span><span class="w"> </span><span class="p">&</span><span class="w"> </span><span class="k">done</span>
</pre></div>
</div>
<p>This spins up 96 clients, each of which sends 128 inference requests.</p>
</div>
<div class="section" id="printing-latency-metrics">
<h3>Printing latency metrics<a class="headerlink" href="#printing-latency-metrics" title="Permalink to this headline">#</a></h3>
<p>After all your requests have been sent to your server you can
run the following command:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>python<span class="w"> </span>latency_printer.py
</pre></div>
</div>
</div>
</div>
<div class="section" id="using-public-bert-savedmodels">
<span id="id2"></span><h2>Using public BERT SavedModels<a class="headerlink" href="#using-public-bert-savedmodels" title="Permalink to this headline">#</a></h2>
<p>We are now providing a compilation script that has better compatibility
with various flavors of BERT SavedModels generated from
<a class="reference external" href="https://github.com/google-research/bert">https://github.com/google-research/bert</a>. Here are the current
limitations:</p>
<ol class="arabic simple">
<li><p>You did not change
<a class="reference external" href="https://github.com/google-research/bert/blob/master/modeling.py">modeling.py</a></p></li>
<li><p>BERT SavedModel is generated using <code class="docutils literal notranslate"><span class="pre">estimator.export_saved_model</span></code></p></li>
<li><p>BERT SavedModel uses fixed sequence length 128 (you may check by
<code class="docutils literal notranslate"><span class="pre">saved_model_cli</span> <span class="pre">show</span> <span class="pre">--dir</span> <span class="pre">/path/to/user/bert/savedmodel</span> <span class="pre">--all</span></code>)</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">neuron-cc</span></code> version is at least 1.0.12000.0</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">aws-neuron-runtime</span></code> version is at least 1.0.7000.0</p></li>
<li><p>The <code class="docutils literal notranslate"><span class="pre">--batch_size</span></code> argument specified in this script is at most 4</p></li>
</ol>
<p>Example usage is shown below:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="nb">export</span><span class="w"> </span><span class="nv">BERT_LARGE_SAVED_MODEL</span><span class="o">=</span><span class="s2">"/path/to/user/bert-large/savedmodel"</span>
<span class="nb">cd</span><span class="w"> </span>~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
python<span class="w"> </span>bert_no_model.py<span class="w"> </span>--input_saved_model<span class="w"> </span><span class="nv">$BERT_LARGE_SAVED_MODEL</span><span class="w"> </span>--output_saved_model<span class="w"> </span>./bert-saved-model-neuron<span class="w"> </span>--batch_size<span class="o">=</span><span class="m">1</span>
</pre></div>
</div>
</div>
<div class="section" id="appendix-1">
<span id="bert-tensorflow-demo-appendix1"></span><h2>Appendix 1<a class="headerlink" href="#appendix-1" title="Permalink to this headline">#</a></h2>
<p>Users who need help finetuning BERT-Large for MRPC and creating a saved
model may follow the instructions here.</p>
<p>Connect to the c5.4xlarge compilation EC2 instance you started above and
download these three items :</p>
<ol class="arabic simple">
<li><p>clone <a class="reference external" href="https://github.com/google-research/bert">this</a> github repo.</p></li>
<li><p>download GLUE data as described
<a class="reference external" href="https://github.com/google-research/bert#user-content-sentence-and-sentence-pair-classification-tasks">here</a>.
Do not run the finetuning command.</p></li>
<li><p>download a desired pre-trained BERT-Large checkpoint from
<a class="reference external" href="https://github.com/google-research/bert#user-content-pre-trained-models">here</a>.
This is the model we will fine tune.</p></li>
</ol>
<p>Next edit run_classifier.py in the cloned bert repo to apply the patch
described in the following git diff.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">diff</span> <span class="o">--</span><span class="n">git</span> <span class="n">a</span><span class="o">/</span><span class="n">run_classifier</span><span class="o">.</span><span class="n">py</span> <span class="n">b</span><span class="o">/</span><span class="n">run_classifier</span><span class="o">.</span><span class="n">py</span>
<span class="n">index</span> <span class="mi">817</span><span class="n">b147</span><span class="o">..</span><span class="n">c9426bc</span> <span class="mi">100644</span>
<span class="o">---</span> <span class="n">a</span><span class="o">/</span><span class="n">run_classifier</span><span class="o">.</span><span class="n">py</span>
<span class="o">+++</span> <span class="n">b</span><span class="o">/</span><span class="n">run_classifier</span><span class="o">.</span><span class="n">py</span>
<span class="o">@@</span> <span class="o">-</span><span class="mi">955</span><span class="p">,</span><span class="mi">6</span> <span class="o">+</span><span class="mi">955</span><span class="p">,</span><span class="mi">18</span> <span class="o">@@</span> <span class="k">def</span> <span class="nf">main</span><span class="p">(</span><span class="n">_</span><span class="p">):</span>
<span class="n">drop_remainder</span><span class="o">=</span><span class="n">predict_drop_remainder</span><span class="p">)</span>
<span class="n">result</span> <span class="o">=</span> <span class="n">estimator</span><span class="o">.</span><span class="n">predict</span><span class="p">(</span><span class="n">input_fn</span><span class="o">=</span><span class="n">predict_input_fn</span><span class="p">)</span>
<span class="o">+</span> <span class="n">features</span> <span class="o">=</span> <span class="p">{</span>
<span class="o">+</span> <span class="s2">"input_ids"</span><span class="p">:</span> <span class="n">tf</span><span class="o">.</span><span class="n">placeholder</span><span class="p">(</span><span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="kc">None</span><span class="p">,</span> <span class="n">FLAGS</span><span class="o">.</span><span class="n">max_seq_length</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">int32</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'input_ids'</span><span class="p">),</span>
<span class="o">+</span> <span class="s2">"input_mask"</span><span class="p">:</span> <span class="n">tf</span><span class="o">.</span><span class="n">placeholder</span><span class="p">(</span><span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="kc">None</span><span class="p">,</span> <span class="n">FLAGS</span><span class="o">.</span><span class="n">max_seq_length</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">int32</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'input_mask'</span><span class="p">),</span>
<span class="o">+</span> <span class="s2">"segment_ids"</span><span class="p">:</span> <span class="n">tf</span><span class="o">.</span><span class="n">placeholder</span><span class="p">(</span><span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="kc">None</span><span class="p">,</span> <span class="n">FLAGS</span><span class="o">.</span><span class="n">max_seq_length</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">int32</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'segment_ids'</span><span class="p">),</span>
<span class="o">+</span> <span class="s2">"label_ids"</span><span class="p">:</span> <span class="n">tf</span><span class="o">.</span><span class="n">placeholder</span><span class="p">(</span><span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="kc">None</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">int32</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'label_ids'</span><span class="p">),</span>
<span class="o">+</span> <span class="s2">"is_real_example"</span><span class="p">:</span> <span class="n">tf</span><span class="o">.</span><span class="n">placeholder</span><span class="p">(</span><span class="n">shape</span><span class="o">=</span><span class="p">[</span><span class="kc">None</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="n">tf</span><span class="o">.</span><span class="n">int32</span><span class="p">,</span> <span class="n">name</span><span class="o">=</span><span class="s1">'is_real_example'</span><span class="p">),</span>
<span class="o">+</span> <span class="p">}</span>
<span class="o">+</span> <span class="n">serving_input_fn</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">estimator</span><span class="o">.</span><span class="n">export</span><span class="o">.</span><span class="n">build_raw_serving_input_receiver_fn</span><span class="p">(</span><span class="n">features</span><span class="p">)</span>
<span class="o">+</span> <span class="n">estimator</span><span class="o">.</span><span class="n">_export_to_tpu</span> <span class="o">=</span> <span class="kc">False</span> <span class="c1">## !!important to add this</span>
<span class="o">+</span> <span class="n">estimator</span><span class="o">.</span><span class="n">export_saved_model</span><span class="p">(</span>
<span class="o">+</span> <span class="n">export_dir_base</span><span class="o">=</span><span class="s1">'./bert_classifier_saved_model'</span><span class="p">,</span>
<span class="o">+</span> <span class="n">serving_input_receiver_fn</span><span class="o">=</span><span class="n">serving_input_fn</span><span class="p">)</span>
<span class="n">output_predict_file</span> <span class="o">=</span> <span class="n">os</span><span class="o">.</span><span class="n">path</span><span class="o">.</span><span class="n">join</span><span class="p">(</span><span class="n">FLAGS</span><span class="o">.</span><span class="n">output_dir</span><span class="p">,</span> <span class="s2">"test_results.tsv"</span><span class="p">)</span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">gfile</span><span class="o">.</span><span class="n">GFile</span><span class="p">(</span><span class="n">output_predict_file</span><span class="p">,</span> <span class="s2">"w"</span><span class="p">)</span> <span class="k">as</span> <span class="n">writer</span><span class="p">:</span>
</pre></div>
</div>
NOTE: Interested users may refer to this [link](https://github.com/google-research/bert/issues/146#issuecomment-569138476) for additional background information on the patch, but it is not necessary for running this demo.

Then, from the bert_demo directory, run the following:

```
source activate aws_neuron_tensorflow_p36
cd ~/aws-neuron-sdk/src/examples/tensorflow/bert_demo/
export BERT_REPO_DIR="/path/to/cloned/bert/repo/directory"
export GLUE_DIR="/path/to/glue/data/directory"
export BERT_BASE_DIR="/path/to/pre-trained/bert-large/checkpoint/directory"
./tune_save.sh
```

A saved model will be created in $BERT_REPO_DIR/bert-saved-model/*random_number*/, where *random_number* is a random number generated for every run. Use this saved model to continue with the rest of the demo.
<div class="section" id="appendix-2">
<span id="bert-tensorflow-demo-appendix2"></span><h2>Appendix 2<a class="headerlink" href="#appendix-2" title="Permalink to this headline">#</a></h2>
<p>For all BERT variants, we currently need to augment the standard Neuron
compilation process for performance tuning. In the future, we intend to
automate this tuning process. This would allow users to use the standard
Neuron compilation process, which requires only a one line change in
user source code. The standard compilation process is described <a class="reference internal" href="../../../../../src/examples/mxnet/resnet50/resnet50.html"><span class="std std-ref">Running Neuron Apache MXNet (Incubating) ResNet50 on Inferentia</span></a>.</p>
<p>The augmented Neuron compilation process is encapsulated by the
bert_model.py script, which performs the following things :</p>
<ol class="arabic simple">
<li><p>Define a Neuron compatible implementation of BERT-Large. For
inference, this is functionally equivalent to the open source
BERT-Large. The changes needed to create a Neuron compatible
BERT-Large implementation is described in <a class="reference internal" href="#bert-tensorflow-demo-appendix3"><span class="std std-ref">Appendix 3</span></a>.</p></li>
<li><p>Extract BERT-Large weights from the open source saved model pointed
to by –input_saved_model and associates it with the Neuron
compatible model</p></li>
<li><p>Invoke TensorFlow-Neuron to compile the Neuron compatible model for
Inferentia using the newly associated weights</p></li>
<li><p>Finally, the compiled model is saved into the location given by
–output_saved_model</p></li>
</ol>
</div>
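For contrast with the augmented flow above, here is a minimal sketch of the standard one-line TensorFlow-Neuron saved-model compilation. It mirrors the `tf.neuron.saved_model.compile` API used in the TensorFlow Serving example later in this document; the paths and batch size are placeholders, not part of the bert_demo scripts:

```
import tensorflow as tf
import tensorflow.neuron  # registers the tf.neuron namespace

# Standard compilation: read a saved model, compile the supported
# subgraphs for Inferentia, and write a new compiled saved model.
tf.neuron.saved_model.compile(
    "/path/to/user/bert-large/savedmodel",  # input saved model
    "./bert-saved-model-neuron-standard",   # compiled output
    batch_size=6)
```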
<div class="section" id="appendix-3">
<span id="bert-tensorflow-demo-appendix3"></span><h2>Appendix 3<a class="headerlink" href="#appendix-3" title="Permalink to this headline">#</a></h2>
<p>The Neuron compatible implementation of BERT-Large is functionally
equivalent to the open source version when used for inference. However,
the detailed implementation does differ and here are the list of changes
:</p>
<ol class="arabic simple">
<li><p>Data Type Casting : If the original BERT-Large an FP32 model,
bert_model.py contains manually defined cast operators to enable
mixed-precision. FP16 is used for multi-head attention and
fully-connected layers, and fp32 everywhere else. This will be
automated in a future release.</p></li>
<li><p>Remove Unused Operators: A model typically contains training
operators that are not used in inference, including a subset of the
reshape operators. Those operators do not affect inference
functionality and have been removed.</p></li>
<li><p>Reimplementation of Selected Operators : A number of operators
(mainly mask operators), has been reimplemented to bypass a known
compiler issue. This will be fixed in a planned future release.</p></li>
<li><p>Manually Partition Embedding Ops to CPU : The embedding portion of
BERT-Large has been partitioned manually to a subgraph that is
executed on the host CPU, without noticable performance impact. In
near future, we plan to implement this through compiler
auto-partitioning without the need for user intervention.</p></li>
</ol>
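As an illustration of change 1, here is a minimal, hypothetical TF1-style sketch of manual mixed-precision casting around a fully-connected layer. The function and tensor names are illustrative only; the real cast operators live in bert_model.py:

```
import tensorflow as tf  # TensorFlow 1.x

def dense_mixed_precision(x, weight, bias):
    """Run the matmul in FP16 while the rest of the graph stays FP32."""
    x16 = tf.cast(x, tf.float16)
    w16 = tf.cast(weight, tf.float16)
    y16 = tf.matmul(x16, w16)
    # cast back so downstream ops (e.g. layer norm) remain in FP32
    return tf.cast(y16, tf.float32) + bias
```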
_This document is relevant for_: `Inf1`
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
<a class="left-prev" id="prev-link" href="../tutorials-tensorflow-nlp.html" title="previous page">
<i class="fas fa-angle-left"></i>
<div class="prev-next-info">
<p class="prev-next-subtitle">previous</p>
<p class="prev-next-title">Natural Language Processing (NLP) Tutorials (<code class="docutils literal notranslate"><span class="pre">tensorflow-neuron</span></code>)</p>
</div>
</a>
<a class="right-next" id="next-link" href="../../../../../src/examples/tensorflow/huggingface_bert/huggingface_bert.html" title="next page">
<div class="prev-next-info">
<p class="prev-next-subtitle">next</p>
<p class="prev-next-title">Running Huggingface DistilBERT with TensorFlow-Neuron</p>
</div>
<i class="fas fa-angle-right"></i>
</a>
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html>
|
2023-09-29T20:54:52.511Z
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/api-reference-guide.rst.txt
|
```
API Reference Guide (``tensorflow-neuron``)
===========================================
.. toctree::
    :maxdepth: 1
    :hidden:

    /frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api
    /frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api
    /frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api
    /frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api

.. include:: /frameworks/tensorflow/tensorflow-neuron/api-reference-guide.txt
```
|
|
2023-09-29T20:54:52.531Z
|
|
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/src/examples/tensorflow/tensorflow_serving_tutorial.html
|
# Using NEURON\_RT\_VISIBLE\_CORES with TensorFlow Serving — AWS Neuron Documentation
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
## Using NEURON\_RT\_VISIBLE\_CORES with TensorFlow Serving[#](#using-neuron-rt-visible-cores-with-tensorflow-serving "Permalink to this headline")
TensorFlow Serving allows customers to scale up inference workloads across a network. TensorFlow Neuron Serving uses the same API as normal TensorFlow Serving, with two differences: (a) the saved model must be compiled for Inferentia, and (b) the entry point is a different binary named `tensorflow_model_server_neuron`. The binary is found at `/usr/local/bin/tensorflow_model_server_neuron` and is pre-installed in the DLAMI, or installed with the APT/YUM tensorflow-model-server-neuron package.
## Install TensorFlow Model Server and Serving API[#](#install-tensorflow-model-server-and-serving-api "Permalink to this headline")
Follow the steps in the [Install TensorFlow Neuron](../../../frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow).
Then install the model server using either apt-get or yum. If you are using TF 1.x, install the appropriate version (see above):
```
sudo apt-get install tensorflow-model-server-neuron
```
or
```
sudo yum install tensorflow-model-server-neuron
```
Also, you will need the TensorFlow Serving API (use `--no-deps` to prevent installation of regular tensorflow). Choose the command matching the version of TensorFlow you wish to use:
For Tensorflow 1.x:
```
pip install --no-deps tensorflow_serving_api==1.15
```
For Tensorflow 2.x:
```
pip install --no-deps tensorflow_serving_api
```
For the example image preprocessing using Keras preprocessing, the Python Imaging Library Pillow is required. There is also a known h5py issue, [https://github.com/aws/aws-neuron-sdk/issues/220](https://github.com/aws/aws-neuron-sdk/issues/220), that needs a workaround when loading Keras models.
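A reasonable reconstruction of the corresponding commands is shown below; treat the h5py version pin as an assumption based on the linked issue:

```
pip install pillow
# assumed workaround for the h5py issue above: pin h5py below 3.0
pip install "h5py<3.0.0"
```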
## Export and Compile Saved Model[#](#export-and-compile-saved-model "Permalink to this headline")
The following example shows graph construction followed by the addition of a Neuron compilation step before exporting to a saved model.
For Tensorflow 1.x:
```
import tensorflow as tf
import tensorflow.neuron
tf.keras.backend.set_learning_phase(0)
tf.keras.backend.set_image_data_format('channels_last')
model = tf.keras.applications.ResNet50(weights='imagenet')
sess = tf.keras.backend.get_session()
inputs = {'input': model.inputs[0]}
outputs = {'output': model.outputs[0]}
# save the model using tf.saved_model.simple_save
modeldir = "./resnet50/1"
tf.saved_model.simple_save(sess, modeldir, inputs, outputs)
# compile the model for Inferentia
neuron_modeldir = "./resnet50_inf1/1"
tf.neuron.saved_model.compile(modeldir, neuron_modeldir, batch_size=1)
```
For Tensorflow 2.x:
```
import tensorflow as tf
import tensorflow.neuron as tfn
import numpy as np
tf.keras.backend.set_learning_phase(0)
tf.keras.backend.set_image_data_format('channels_last')
image_sizes = [224, 224]
model = tf.keras.applications.ResNet50(weights='imagenet')
example_inputs = tf.random.uniform([1, *image_sizes, 3], dtype=tf.float32)
# run the model once to define the forward pass and allow for saving
model(example_inputs)
model_neuron = tfn.trace(model, example_inputs)
tf.keras.models.save_model(model_neuron, './resnet50_inf1/1')
```
## Serving Saved Model[#](#serving-saved-model "Permalink to this headline")
Users can now serve the saved model with the tensorflow\_model\_server\_neuron binary. To utilize multiple NeuronCores, it is recommended to launch multiple tensorflow model servers that listen on the same gRPC port:
```
export NEURON_RT_VISIBLE_CORES=0 # important to set this environment variable before launching model servers
tensorflow_model_server_neuron --model_name=resnet50_inf1 \
--model_base_path=$(pwd)/resnet50_inf1/ --port=8500
#then to run another server on a different neuron core open another
#window and run this, except this time set NEURON_RT_VISIBLE_CORES=1
#you can keep doing this up to the number of Neuron Cores on your machine
export NEURON_RT_VISIBLE_CORES=1
tensorflow_model_server_neuron --model_name=resnet50_inf1 \
--model_base_path=$(pwd)/resnet50_inf1/ --port=8500
```
The compiled model is staged in Inferentia DRAM by the server to prepare for inference.
## Generate inference requests to the model server[#](#generate-inference-requests-to-the-model-server "Permalink to this headline")
Now run inferences via gRPC, as shown in the following sample client code:
For Tensorflow 1.x:
```
import numpy as np
import grpc
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.applications.resnet50 import decode_predictions
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
if __name__ == '__main__':
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
img_file = tf.keras.utils.get_file(
"./kitten_small.jpg",
"https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg")
img = image.load_img(img_file, target_size=(224, 224))
img_array = preprocess_input(image.img_to_array(img)[None, ...])
request = predict_pb2.PredictRequest()
request.model_spec.name = 'resnet50_inf1'
request.inputs['input'].CopyFrom(
tf.contrib.util.make_tensor_proto(img_array, shape=img_array.shape))
result = stub.Predict(request)
prediction = tf.make_ndarray(result.outputs['output'])
print(decode_predictions(prediction))
```
For Tensorflow 2.x:
```
import numpy as np
import grpc
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
from tensorflow.keras.applications.resnet50 import decode_predictions
tf.keras.backend.set_image_data_format('channels_last')
if __name__ == '__main__':
channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
img_file = tf.keras.utils.get_file(
"./kitten_small.jpg",
"https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg")
img = image.load_img(img_file, target_size=(224, 224))
img_array = preprocess_input(image.img_to_array(img)[None, ...])
request = predict_pb2.PredictRequest()
request.model_spec.name = 'resnet50_inf1'
request.inputs['input_1'].CopyFrom(
tf.make_tensor_proto(img_array, shape=img_array.shape))
result = stub.Predict(request)
prediction = tf.make_ndarray(result.outputs['output_1'])
print(decode_predictions(prediction))
```
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
|
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fsrc/examples/tensorflow/tensorflow_serving_tutorial.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/src/examples/tensorflow/tensorflow_serving_tutorial.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../_sources/src/examples/tensorflow/tensorflow_serving_tutorial.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.rst</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#install-tensorflow-model-server-and-serving-api">
Install TensorFlow Model Server and Serving API
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#export-and-compile-saved-model">
Export and Compile Saved Model
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#serving-saved-model">
Serving Saved Model
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#generate-inference-requests-to-the-model-server">
Generate inference requests to the model server
</a>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
<div>
<h2> Contents </h2>
</div>
<nav aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#install-tensorflow-model-server-and-serving-api">
Install TensorFlow Model Server and Serving API
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#export-and-compile-saved-model">
Export and Compile Saved Model
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#serving-saved-model">
Serving Saved Model
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#generate-inference-requests-to-the-model-server">
Generate inference requests to the model server
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
*This document is relevant for*: ``Inf1``, ``Inf2``, ``Trn1``, ``Trn1n``

Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
=====================================================

TensorFlow Serving allows customers to scale up inference workloads across
a network. TensorFlow Neuron Serving uses the same API as standard
TensorFlow Serving with two differences: (a) the saved model must be
compiled for Inferentia, and (b) the entry point is a different binary
named ``tensorflow_model_server_neuron``. The binary is found at
``/usr/local/bin/tensorflow_model_server_neuron`` and is pre-installed in
the DLAMI, or installed with the APT/YUM ``tensorflow-model-server-neuron``
package.
<div class="section" id="install-tensorflow-model-server-and-serving-api">
<h2>Install TensorFlow Model Server and Serving API<a class="headerlink" href="#install-tensorflow-model-server-and-serving-api" title="Permalink to this headline">#</a></h2>
<p>Follow the steps in the <a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/setup/tensorflow-install.html#install-neuron-tensorflow"><span class="std std-ref">Install TensorFlow Neuron</span></a>.</p>
<p>Then ensure you install using either apt-get or yum.
If using TF 1.x, install the appropriate version (see above).:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>sudo<span class="w"> </span>apt-get<span class="w"> </span>install<span class="w"> </span>tensorflow-model-server-neuron
</pre></div>
</div>
<p>or</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>sudo<span class="w"> </span>yum<span class="w"> </span>install<span class="w"> </span>tensorflow-model-server-neuron
</pre></div>
</div>
<p>Also, you would need TensorFlow Serving API (use –no-deps to prevent
installation of regular tensorflow). Depending on the version of Tensorflow
you wish to use:</p>
<p>For Tensorflow 1.x:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>pip<span class="w"> </span>install<span class="w"> </span>--no-deps<span class="w"> </span><span class="nv">tensorflow_serving_api</span><span class="o">==</span><span class="m">1</span>.15
</pre></div>
</div>
<p>For Tensorflow 2.x:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>pip<span class="w"> </span>install<span class="w"> </span>--no-deps<span class="w"> </span>tensorflow_serving_api
</pre></div>
</div>
<p>For the example image preprocessing using Keras preprocessing, the
Python Imaging Library Pillow is required:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>pip<span class="w"> </span>install<span class="w"> </span>pillow
</pre></div>
</div>
<p>To workaround h5py issue <a class="reference external" href="https://github.com/aws/aws-neuron-sdk/issues/220">https://github.com/aws/aws-neuron-sdk/issues/220</a>:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>pip<span class="w"> </span>install<span class="w"> </span><span class="s2">"h5py<3.0.0"</span>
</pre></div>
</div>
</div>
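
As a quick sanity check (this step is not in the original tutorial), you
can confirm that the Serving API imports cleanly without having pulled in
a conflicting regular TensorFlow build:

.. code-block:: bash

   python -c "from tensorflow_serving.apis import predict_pb2; print('serving API OK')"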
<div class="section" id="export-and-compile-saved-model">
<h2>Export and Compile Saved Model<a class="headerlink" href="#export-and-compile-saved-model" title="Permalink to this headline">#</a></h2>
<p>The following example shows graph construction followed by the addition
of Neuron compilation step before exporting to saved model.</p>
<p>For Tensorflow 1.x:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="kn">import</span> <span class="nn">tensorflow.neuron</span>
<span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">backend</span><span class="o">.</span><span class="n">set_learning_phase</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span>
<span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">backend</span><span class="o">.</span><span class="n">set_image_data_format</span><span class="p">(</span><span class="s1">'channels_last'</span><span class="p">)</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">applications</span><span class="o">.</span><span class="n">ResNet50</span><span class="p">(</span><span class="n">weights</span><span class="o">=</span><span class="s1">'imagenet'</span><span class="p">)</span>
<span class="n">sess</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">backend</span><span class="o">.</span><span class="n">get_session</span><span class="p">()</span>
<span class="n">inputs</span> <span class="o">=</span> <span class="p">{</span><span class="s1">'input'</span><span class="p">:</span> <span class="n">model</span><span class="o">.</span><span class="n">inputs</span><span class="p">[</span><span class="mi">0</span><span class="p">]}</span>
<span class="n">outputs</span> <span class="o">=</span> <span class="p">{</span><span class="s1">'output'</span><span class="p">:</span> <span class="n">model</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="mi">0</span><span class="p">]}</span>
<span class="c1"># save the model using tf.saved_model.simple_save</span>
<span class="n">modeldir</span> <span class="o">=</span> <span class="s2">"./resnet50/1"</span>
<span class="n">tf</span><span class="o">.</span><span class="n">saved_model</span><span class="o">.</span><span class="n">simple_save</span><span class="p">(</span><span class="n">sess</span><span class="p">,</span> <span class="n">modeldir</span><span class="p">,</span> <span class="n">inputs</span><span class="p">,</span> <span class="n">outputs</span><span class="p">)</span>
<span class="c1"># compile the model for Inferentia</span>
<span class="n">neuron_modeldir</span> <span class="o">=</span> <span class="s2">"./resnet50_inf1/1"</span>
<span class="n">tf</span><span class="o">.</span><span class="n">neuron</span><span class="o">.</span><span class="n">saved_model</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span><span class="n">modeldir</span><span class="p">,</span> <span class="n">neuron_modeldir</span><span class="p">,</span> <span class="n">batch_size</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
</pre></div>
</div>
For TensorFlow 2.x:

.. code-block:: python

   import tensorflow as tf
   import tensorflow.neuron as tfn
   import numpy as np

   tf.keras.backend.set_learning_phase(0)
   tf.keras.backend.set_image_data_format('channels_last')
   image_sizes = [224, 224]
   model = tf.keras.applications.ResNet50(weights='imagenet')
   example_inputs = tf.random.uniform([1, *image_sizes, 3], dtype=tf.float32)
   model_neuron = tfn.trace(model, example_inputs)
   # run the traced model once to define the forward pass and allow for saving
   model_neuron(example_inputs)
   tf.keras.models.save_model(model_neuron, './resnet50_inf1/1')
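
Before serving, it can help to confirm the exported signature's input and
output tensor names; these are the keys used by the gRPC clients below
(``input``/``output`` for the TF 1.x export, ``input_1``/``output_1`` for
the traced TF 2.x model, though your names may differ). A quick check with
TensorFlow's ``saved_model_cli`` (this step is not part of the original
tutorial):

.. code-block:: bash

   saved_model_cli show --dir ./resnet50_inf1/1 \
       --tag_set serve --signature_def serving_default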
<div class="section" id="serving-saved-model">
<h2>Serving Saved Model<a class="headerlink" href="#serving-saved-model" title="Permalink to this headline">#</a></h2>
<p>User can now serve the saved model with the
tensorflow_model_server_neuron binary. To utilize multiple NeuronCores,
it is recommended to launch multiple tensorflow model servers that
listen to the same gRPC port:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="nb">export</span><span class="w"> </span><span class="nv">NEURON_RT_VISIBLE_CORES</span><span class="o">=</span><span class="m">0</span><span class="w"> </span><span class="c1"># important to set this environment variable before launching model servers</span>
tensorflow_model_server_neuron<span class="w"> </span>--model_name<span class="o">=</span>resnet50_inf1<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--model_base_path<span class="o">=</span><span class="k">$(</span><span class="nb">pwd</span><span class="k">)</span>/resnet50_inf1/<span class="w"> </span>--port<span class="o">=</span><span class="m">8500</span>
<span class="c1">#then to run another server on a different neuron core open another</span>
<span class="c1">#window and run this, except this time set NEURON_RT_VISIBLE_CORES=1</span>
<span class="c1">#you can keep doing this up to the number of Neuron Cores on your machine</span>
<span class="nb">export</span><span class="w"> </span><span class="nv">NEURON_RT_VISIBLE_CORES</span><span class="o">=</span><span class="m">1</span>
tensorflow_model_server_neuron<span class="w"> </span>--model_name<span class="o">=</span>resnet50_inf1<span class="w"> </span><span class="se">\</span>
<span class="w"> </span>--model_base_path<span class="o">=</span><span class="k">$(</span><span class="nb">pwd</span><span class="k">)</span>/resnet50_inf1/<span class="w"> </span>--port<span class="o">=</span><span class="m">8500</span>
</pre></div>
</div>
<p>The compiled model is staged in Inferentia DRAM by the server to prepare
for inference.</p>
</div>
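
If your instance has many NeuronCores, launching each server by hand gets
tedious. Below is a hypothetical helper loop (not part of the original
tutorial) that starts one background server per NeuronCore, each pinned
via ``NEURON_RT_VISIBLE_CORES`` and listening on the same gRPC port as
above; adjust ``NUM_CORES`` to match your instance:

.. code-block:: bash

   NUM_CORES=4  # e.g. an inf1.xlarge exposes 4 NeuronCores
   for i in $(seq 0 $((NUM_CORES - 1))); do
       NEURON_RT_VISIBLE_CORES=$i tensorflow_model_server_neuron \
           --model_name=resnet50_inf1 \
           --model_base_path=$(pwd)/resnet50_inf1/ --port=8500 &
   done
   wait  # keep the shell attached to the background servers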
<div class="section" id="generate-inference-requests-to-the-model-server">
<h2>Generate inference requests to the model server<a class="headerlink" href="#generate-inference-requests-to-the-model-server" title="Permalink to this headline">#</a></h2>
<p>Now run inferences via GRPC as shown in the following sample client
code:</p>
<p>For Tensorflow 1.x:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">import</span> <span class="nn">grpc</span>
<span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="kn">from</span> <span class="nn">tensorflow.keras.preprocessing</span> <span class="kn">import</span> <span class="n">image</span>
<span class="kn">from</span> <span class="nn">tensorflow.keras.applications.resnet50</span> <span class="kn">import</span> <span class="n">preprocess_input</span>
<span class="kn">from</span> <span class="nn">tensorflow.keras.applications.resnet50</span> <span class="kn">import</span> <span class="n">decode_predictions</span>
<span class="kn">from</span> <span class="nn">tensorflow_serving.apis</span> <span class="kn">import</span> <span class="n">predict_pb2</span>
<span class="kn">from</span> <span class="nn">tensorflow_serving.apis</span> <span class="kn">import</span> <span class="n">prediction_service_pb2_grpc</span>
<span class="k">if</span> <span class="vm">__name__</span> <span class="o">==</span> <span class="s1">'__main__'</span><span class="p">:</span>
<span class="n">channel</span> <span class="o">=</span> <span class="n">grpc</span><span class="o">.</span><span class="n">insecure_channel</span><span class="p">(</span><span class="s1">'localhost:8500'</span><span class="p">)</span>
<span class="n">stub</span> <span class="o">=</span> <span class="n">prediction_service_pb2_grpc</span><span class="o">.</span><span class="n">PredictionServiceStub</span><span class="p">(</span><span class="n">channel</span><span class="p">)</span>
<span class="n">img_file</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">utils</span><span class="o">.</span><span class="n">get_file</span><span class="p">(</span>
<span class="s2">"./kitten_small.jpg"</span><span class="p">,</span>
<span class="s2">"https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg"</span><span class="p">)</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">image</span><span class="o">.</span><span class="n">load_img</span><span class="p">(</span><span class="n">img_file</span><span class="p">,</span> <span class="n">target_size</span><span class="o">=</span><span class="p">(</span><span class="mi">224</span><span class="p">,</span> <span class="mi">224</span><span class="p">))</span>
<span class="n">img_array</span> <span class="o">=</span> <span class="n">preprocess_input</span><span class="p">(</span><span class="n">image</span><span class="o">.</span><span class="n">img_to_array</span><span class="p">(</span><span class="n">img</span><span class="p">)[</span><span class="kc">None</span><span class="p">,</span> <span class="o">...</span><span class="p">])</span>
<span class="n">request</span> <span class="o">=</span> <span class="n">predict_pb2</span><span class="o">.</span><span class="n">PredictRequest</span><span class="p">()</span>
<span class="n">request</span><span class="o">.</span><span class="n">model_spec</span><span class="o">.</span><span class="n">name</span> <span class="o">=</span> <span class="s1">'resnet50_inf1'</span>
<span class="n">request</span><span class="o">.</span><span class="n">inputs</span><span class="p">[</span><span class="s1">'input'</span><span class="p">]</span><span class="o">.</span><span class="n">CopyFrom</span><span class="p">(</span>
<span class="n">tf</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">util</span><span class="o">.</span><span class="n">make_tensor_proto</span><span class="p">(</span><span class="n">img_array</span><span class="p">,</span> <span class="n">shape</span><span class="o">=</span><span class="n">img_array</span><span class="o">.</span><span class="n">shape</span><span class="p">))</span>
<span class="n">result</span> <span class="o">=</span> <span class="n">stub</span><span class="o">.</span><span class="n">Predict</span><span class="p">(</span><span class="n">request</span><span class="p">)</span>
<span class="n">prediction</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">make_ndarray</span><span class="p">(</span><span class="n">result</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="s1">'output'</span><span class="p">])</span>
<span class="nb">print</span><span class="p">(</span><span class="n">decode_predictions</span><span class="p">(</span><span class="n">prediction</span><span class="p">))</span>
</pre></div>
</div>
For TensorFlow 2.x:

.. code-block:: python

   import numpy as np
   import grpc
   import tensorflow as tf
   from tensorflow.keras.preprocessing import image
   from tensorflow.keras.applications.resnet50 import preprocess_input
   from tensorflow.keras.applications.resnet50 import decode_predictions
   from tensorflow_serving.apis import predict_pb2
   from tensorflow_serving.apis import prediction_service_pb2_grpc

   tf.keras.backend.set_image_data_format('channels_last')

   if __name__ == '__main__':
       channel = grpc.insecure_channel('localhost:8500')
       stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
       img_file = tf.keras.utils.get_file(
           "./kitten_small.jpg",
           "https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg")
       img = image.load_img(img_file, target_size=(224, 224))
       img_array = preprocess_input(image.img_to_array(img)[None, ...])
       request = predict_pb2.PredictRequest()
       request.model_spec.name = 'resnet50_inf1'
       request.inputs['input_1'].CopyFrom(
           tf.make_tensor_proto(img_array, shape=img_array.shape))
       result = stub.Predict(request)
       prediction = tf.make_ndarray(result.outputs['output_1'])
       print(decode_predictions(prediction))
*This document is relevant for*: ``Inf1``, ``Inf2``, ``Trn1``, ``Trn1n``
|
2023-09-29T20:54:52.662Z
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.rst.txt
|
```
.. _tensorflow-ref-neuron-tracing-api:
TensorFlow 2.x (``tensorflow-neuron``) Tracing API
==================================================
The Neuron tracing API enables tracing TensorFlow 2.x models for deployment
on AWS Machine Learning Accelerators.
Method
------
``tensorflow.neuron.trace``
Description
-----------
Trace a ``keras.Model`` or a Python callable that can be decorated by
``tf.function``, and return an AWS-Neuron-optimized ``keras.Model`` that
can execute on AWS Machine Learning Accelerators. Tracing is ideal for
``keras.Model`` that accepts a list of ``tf.Tensor`` objects and returns
a list of ``tf.Tensor`` objects. It is expected that users will provide
example inputs, and the ``trace`` function will execute ``func``
symbolically and convert it to a ``keras.Model``.
The returned ``keras.Model`` will support inference only. Attributes or
variables held by the original function or ``keras.Model`` will be dropped.
The returned ``keras.Model`` can be exported as a ``SavedModel`` and served using
TensorFlow Serving. Please see :ref:`tensorflow-serving` for more
information about exporting to a ``SavedModel`` and serving using TensorFlow
Serving.
The returned ``keras.Model`` has an ``.on_neuron_ratio`` attribute
which shows the percentage of ops mapped to Neuron hardware. This calculation
ignores PlaceholderOp, IdentityOp, ReadVariableOp, and NoOp.
Options can be passed to Neuron compiler via the environment variable
``NEURON_CC_FLAGS``. For example, the syntax
``env NEURON_CC_FLAGS="--neuroncore-pipeline-cores=4"`` directs Neuron
compiler to compile each subgraph to fit in the specified number of
NeuronCores. This number can be less than the total available NeuronCores
on an Inf1 instance. See :ref:`neuron-compiler-cli-reference` for more
information about compiler options.
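
As a minimal sketch (assuming the example flag value suits your model), the
variable can also be set from within Python before tracing:

.. code:: python

   import os
   import tensorflow as tf
   import tensorflow.neuron as tfn

   # assumption for illustration: four pipeline cores fit the model
   os.environ['NEURON_CC_FLAGS'] = '--neuroncore-pipeline-cores=4'

   model = tf.keras.Sequential([tf.keras.layers.Dense(3, input_shape=(3,))])
   model_neuron = tfn.trace(model, tf.random.uniform([1, 3]))
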
Arguments
---------
- **func:** The ``keras.Model`` or function to be traced.
- **example_inputs:** A ``tf.Tensor`` or a tuple/list/dict of
``tf.Tensor`` objects for tracing the function. When ``example_inputs``
is a ``tf.Tensor`` or a list of ``tf.Tensor`` objects, we expect
``func`` to have calling signature ``func(example_inputs)``. Otherwise,
the expectation is that inference on ``func`` is done by calling
``func(*example_inputs)`` when ``example_inputs`` is a ``tuple``,
or ``func(**example_inputs)`` when ``example_inputs`` is a ``dict``.
The case where ``func`` accepts mixed positional and keyword arguments
is currently unsupported. A sketch of the ``dict`` calling convention appears after this list.
- **subgraph_builder_function:** (Optional) A callable with signature
``subgraph_builder_function(node : NodeDef) -> bool``
(``NodeDef`` is defined in tensorflow/core/framework/node_def.proto)
that is used as a call-back function to determine which part of
the tensorflow GraphDef given by tracing ``func`` will be placed on
Machine Learning Accelerators.
If ``subgraph_builder_function`` is not provided, then ``trace`` will
automatically place operations on Machine Learning Accelerators or
on CPU to maximize the execution efficiency.
If it is provided, and ``subgraph_builder_function(node)`` returns
``True``, and placing ``node`` on Machine Learning Accelerators
will not cause deadlocks during execution, then ``trace`` will place
``node`` on Machine Learning Accelerators. If
``subgraph_builder_function(node)`` returns ``False``, then ``trace``
will place ``node`` on CPU.
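
Below is a minimal sketch of the ``dict`` calling convention for
``example_inputs`` (the function and key names are illustrative only):

.. code:: python

   import tensorflow as tf
   import tensorflow.neuron as tfn

   def func(x, y):
       return tf.matmul(x, y) + 1.0

   # keys must match func's parameter names: trace calls func(**example_inputs)
   example_inputs = {
       'x': tf.random.uniform([2, 3]),
       'y': tf.random.uniform([3, 4]),
   }
   model_neuron = tfn.trace(func, example_inputs)
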
Special Flags
-------------
These are flags that get passed directly to the Neuron tracing API
(rather than the Neuron Compiler). The flags are still passed
via the environment variable ``NEURON_CC_FLAGS``.
- **workdir:** example usage - ``NEURON_CC_FLAGS='--workdir ./artifacts'``
will create a folder named ``artifacts`` in the current directory and
save artifacts that can be used for debugging.
- **dynamic-batch-size:** example usage -
``NEURON_CC_FLAGS='--dynamic-batch-size'``. This flag allows Neuron graphs to
consume variable-sized batches of data. Dynamic sizing is restricted to the
0th dimension of a tensor (see the sketch after this list).
- **extract-weights (EXPERIMENTAL):** example usage -
``NEURON_CC_FLAGS='--extract-weights inf1.2xlarge'`` will reduce the compiled
model's protobuf size by taking the weights out of the protobuf.
Useful for compiling large models that would exceed the 2GB protobuf
size limit. This feature is experimental. Model performance is not
guaranteed and the flag does not work in combination with
``--neuroncore-pipeline-cores``, ``--dynamic-batch-size``, models with
multiple NEFFs, and models that are 4GB or greater.
The model is compiled for the specific Neuron instance type passed;
all Inf1 instance types are supported.
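
A minimal sketch of the ``--dynamic-batch-size`` flag (whether dynamic
batching benefits a given model is an assumption of the example):

.. code:: python

   import os
   import tensorflow as tf
   import tensorflow.neuron as tfn

   os.environ['NEURON_CC_FLAGS'] = '--dynamic-batch-size'

   model = tf.keras.Sequential([tf.keras.layers.Dense(3, input_shape=(3,))])
   model_neuron = tfn.trace(model, tf.random.uniform([1, 3]))  # traced at batch size 1

   # the 0th (batch) dimension may now vary at inference time
   print(model_neuron(tf.random.uniform([8, 3])))
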
Returns
-------
- An AWS-Neuron-optimized ``keras.Model``.
Example Usage
-------------
.. code:: python

   import tensorflow as tf
   import tensorflow.neuron as tfn

   input0 = tf.keras.layers.Input(3)
   dense0 = tf.keras.layers.Dense(3)(input0)
   model = tf.keras.Model(inputs=[input0], outputs=[dense0])
   example_inputs = tf.random.uniform([1, 3])
   model_neuron = tfn.trace(model, example_inputs)  # trace

   # check to see how much of the model was compiled successfully
   print(model_neuron.on_neuron_ratio)

   model_dir = './model_neuron'
   model_neuron.save(model_dir)
   model_neuron_reloaded = tf.keras.models.load_model(model_dir)
Example Usage with Manual Device Placement Using ``subgraph_builder_function``
--------------------------------------------------------------------------------
.. code:: python

   import tensorflow as tf
   import tensorflow.neuron as tfn

   input0 = tf.keras.layers.Input(3)
   dense0 = tf.keras.layers.Dense(3)(input0)
   reshape0 = tf.keras.layers.Reshape([1, 3])(dense0)
   output0 = tf.keras.layers.Dense(2)(reshape0)
   model = tf.keras.Model(inputs=[input0], outputs=[output0])
   example_inputs = tf.random.uniform([1, 3])

   def subgraph_builder_function(node):
       return node.op == 'MatMul'

   model_neuron = tfn.trace(
       model, example_inputs,
       subgraph_builder_function=subgraph_builder_function,
   )
.. important::

   Although the old API ``tensorflow.neuron.saved_model.compile`` is still available under tensorflow-neuron 2.x,
   it supports only the limited capabilities of ``tensorflow.neuron.trace`` and will be deprecated in future releases.
```
|
2023-09-29T20:54:52.787Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.rst.txt
|
```
.. _tensorflow-ref-neuron-compile-api:
TensorFlow 1.x (``tensorflow-neuron``) Compilation API
======================================================
The Neuron compilation API for TensorFlow 1.x enables compilation of a saved
model for an Inferentia target.
Method
------
``tensorflow.neuron.saved_model.compile``
Description
-----------
Within the graph or subgraph, the compile method selects Neuron-supported
operations, sends them to the Neuron compiler for compilation, and saves
the compiled artifacts in the graph. Uncompilable operations are kept as
original operations for framework execution.
The compiled graph can be exported to a ``SavedModel`` and served using
TensorFlow Serving. Please see :ref:`tensorflow-serving` for more
information about exporting to a ``SavedModel`` and serving using TensorFlow
Serving.
Options can be passed to Neuron compiler via the compile function. For
example, the “\ ``--neuroncore-pipeline-cores``\ ” option directs Neuron
compiler to compile each subgraph to fit in the specified number of
NeuronCores. This number can be less than the total available
NeuronCores on an Inf1 instance. See :ref:`neuron-compiler-cli-reference`
for more information about compiler options.
Arguments
---------
- **model_dir:** The path of the original ``SavedModel``.
- **new_model_dir:** The path to which the Neuron-optimized
``SavedModel`` will be stored.
- **batch_size:** (Optional) Positive integer representing batch size
used in inference. The default value is 1.
- **model_shape_feed_dict:** (Optional) Dictionary {str: list} used for
inferring tensor shapes. Keys should match model input names. Values
are lists of positive integers representing model input tensor
shapes.
- **model_feed_dict:** (Optional) Dictionary {str: numpy.array} used
for inference. Useful for inferring tensor shapes. Keys should match
model input names. Values are numpy arrays that can be fed as inputs
to the ``SavedModel``.
- **tags:** (Optional) Iterable of strings to identify the required
``MetaGraphDef``. These should correspond to the tags used when
saving the variables using the ``SavedModel`` ``save()`` API. Default
is to use the first ``tag_set`` available in the ``SavedModel``.
- **signature_def_key:** (Optional) String specifying the
``signature_def`` to use. Default is to use 'serving_default' or the
first ``signature_def`` corresponding to ``tags``.
- **minimum_segment_size:** (Optional) Integer indicating the minimum
number of operations in a NeuronOp.
- **no_fuse_ops:** (Optional) None or iterable of strings (unordered)
representing names of operations that are forcibly placed on CPU.
- **compiler_args:** (Optional) List of strings representing neuron-cc
compiler arguments. Note that these arguments apply to all subgraphs
generated by whitelist partitioning. For example, use
``compiler_args=['--neuroncore-pipeline-cores', '4']`` to set number
of NeuronCores per subgraph to 4. See :ref:`neuron-compiler-cli-reference`
for more information about compiler options.
- **compiler_workdir:** (Optional) String representing work directory
of the neuron-cc compiler.
Returns
-------
- Dictionary with operator counts before/after optimization.
- Operator count statistics are displayed to show original count,
post-optimization count, and the number placed on Neuron runtime. For
example:
::

   INFO:tensorflow:Number of operations in TensorFlow session: 3978
   INFO:tensorflow:Number of operations after tf.neuron optimizations: 555
   INFO:tensorflow:Number of operations placed on Neuron runtime: 554
Example Usage
-------------
.. code:: python

   import shutil
   import tensorflow.neuron as tfn

   saved_model_path = "<saved model path>"
   compiled_saved_model_path = "<compiled saved model path>"
   shutil.rmtree(compiled_saved_model_path, ignore_errors=True)
   tfn.saved_model.compile(saved_model_path, compiled_saved_model_path)
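
A hedged variant of the same call that exercises some of the optional
arguments documented above (the input name ``'input_1'`` and the input
shape are placeholders, not values from a real model):

.. code:: python

   import numpy as np
   import tensorflow.neuron as tfn

   saved_model_path = "<saved model path>"
   compiled_saved_model_path = "<compiled saved model path>"
   result = tfn.saved_model.compile(
       saved_model_path, compiled_saved_model_path,
       model_feed_dict={'input_1': np.zeros([1, 224, 224, 3], dtype=np.float32)},
       compiler_args=['--neuroncore-pipeline-cores', '4'],
   )
   print(result)  # dictionary with operator counts before/after optimization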
```
|
2023-09-29T20:54:52.797Z
|
|
Running Huggingface DistilBERT with TensorFlow-Neuron — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/src/examples/tensorflow/huggingface_bert/huggingface_bert.html
|
# Running Huggingface DistilBERT with TensorFlow-Neuron — AWS Neuron Documentation
## Contents
- [Setup](#Setup)
- [Download From Huggingface and Compile for AWS-Neuron](#Download-From-Huggingface-and-Compile-for-AWS-Neuron)
- [Run Basic Inference Benchmarking](#Run-Basic-Inference-Benchmarking)
## Running Huggingface DistilBERT with TensorFlow-Neuron[#](#Running-Huggingface-DistilBERT-with-TensorFlow-Neuron "Permalink to this headline")
In this tutorial you will compile and deploy the DistilBERT version of HuggingFace 🤗 Transformers BERT for Inferentia using TensorFlow-Neuron. The full list of HuggingFace's pretrained BERT models can be found in the BERT section of this page: [https://huggingface.co/transformers/pretrained\_models.html](https://huggingface.co/transformers/pretrained_models.html). You can also read about HuggingFace's pipeline feature here: [https://huggingface.co/transformers/main\_classes/pipelines.html](https://huggingface.co/transformers/main_classes/pipelines.html)
This Jupyter notebook should be run on an inf1.6xlarge or larger instance. In a real-life scenario, however, compilation should be done on a compute instance and deployment on an Inf1 instance to save costs.
## Setup[#](#Setup "Permalink to this headline")
To run this tutorial please follow the instructions for [TensorFlow-Neuron Setup](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/tensorflow-neuron.html#setup-tensorflow-neuron) and the [Jupyter Notebook Quickstart](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.html) and set your kernel to “Python (tensorflow-neuron)” .
Next, install some additional dependencies.
```
# Suppresses tokenizer warnings, making errors easier to detect
%env TOKENIZERS_PARALLELISM=True
!pip install transformers==4.30.2
!pip install ipywidgets
```
## Download From Huggingface and Compile for AWS-Neuron[#](#Download-From-Huggingface-and-Compile-for-AWS-Neuron "Permalink to this headline")
```
import tensorflow as tf
import tensorflow_neuron as tfn
from transformers import DistilBertTokenizer, TFDistilBertModel

# Create a wrapper for the DistilBERT model that will accept inputs as a list
# instead of a dictionary. This will allow the compiled model to be saved
# to disk with the model.save() function.
class DistilBertWrapper(tf.keras.Model):
    def __init__(self, model):
        super().__init__()
        self.model = model
    def __call__(self, example_inputs):
        return self.model({'input_ids' : example_inputs[0], 'attention_mask' : example_inputs[1]})

tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english')
model = DistilBertWrapper(TFDistilBertModel.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english'))

batch_size = 16

# create example inputs with a batch size of 16
text = ["Paris is the <mask> of France."] * batch_size
encoded_input = tokenizer(text, return_tensors='tf', padding='max_length', max_length=64)

# turn inputs into a list
example_input = [encoded_input['input_ids'], encoded_input['attention_mask']]

# compile
model_neuron = tfn.trace(model, example_input)
print("Running on neuron:", model_neuron(example_input))

# save the model to disk to save recompilation time for next usage
model_neuron.save('./distilbert-neuron-b16')
```
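
As a quick sanity check, the compiled model can be fed freshly tokenized text. This is a sketch rather than part of the original tutorial flow: the sentence below is arbitrary, and the inputs must be padded to the traced shapes (batch 16, sequence length 64) because the compiled graph expects fixed input shapes.

```
# hedged sketch: inputs must match the shapes used during tracing
new_text = ["This movie was great!"] * batch_size
new_encoded = tokenizer(new_text, return_tensors='tf', padding='max_length', max_length=64)
new_input = [new_encoded['input_ids'], new_encoded['attention_mask']]
print(model_neuron(new_input))
```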
## Run Basic Inference Benchmarking[#](#Run-Basic-Inference-Benchmarking "Permalink to this headline")
```
import numpy as np
import concurrent.futures
import time

reloaded_neuron_model = tf.keras.models.load_model('./distilbert-neuron-b16')
print("Reloaded model running on neuron:", reloaded_neuron_model(example_input))

num_threads = 4
num_inferences = 1000

latency_list = []
def inference_with_latency_calculation(example_input):
    global latency_list
    start = time.time()
    result = reloaded_neuron_model(example_input)
    end = time.time()
    latency_list.append((end - start) * 1000)
    return result

start = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=num_threads) as executor:
    futures = []
    for i in range(num_inferences):
        futures.append(executor.submit(inference_with_latency_calculation, example_input))
    for future in concurrent.futures.as_completed(futures):
        get_result = future.result()
end = time.time()

total_time = end - start
throughput = (num_inferences * batch_size) / total_time

print(f"Throughput was {throughput} samples per second.")
print(f"Latency p50 was {np.percentile(latency_list, 50)} ms")
print(f"Latency p90 was {np.percentile(latency_list, 90)} ms")
print(f"Latency p95 was {np.percentile(latency_list, 95)} ms")
print(f"Latency p99 was {np.percentile(latency_list, 99)} ms")
assert(throughput >= 1930.0)
```
|
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 current active has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 current active has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l3 current active has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input checked="" class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul class="current">
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4 current active">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
<div class="section" id="Running-Huggingface-DistilBERT-with-TensorFlow-Neuron">
<h1>Running Huggingface DistilBERT with TensorFlow-Neuron<a class="headerlink" href="#Running-Huggingface-DistilBERT-with-TensorFlow-Neuron" title="Permalink to this headline">#</a></h1>
<p>In this tutorial you will compile and deploy DistilBERT version of HuggingFace 🤗 Transformers BERT for Inferentia using TensorFlow-Neuron. The full list of HuggingFace’s pretrained BERT models can be found in the BERT section on this page <a class="reference external" href="https://huggingface.co/transformers/pretrained_models.html">https://huggingface.co/transformers/pretrained_models.html</a>. you can also read about HuggingFace’s pipeline feature here: <a class="reference external" href="https://huggingface.co/transformers/main_classes/pipelines.html">https://huggingface.co/transformers/main_classes/pipelines.html</a></p>
<p>This Jupyter notebook should be run on an instance which is inf1.6xlarge or larger, but in real life scenario the compilation should be done on a compute instance and the deployment on inf1 instance to save costs.</p>
<div class="section" id="Setup">
<h2>Setup<a class="headerlink" href="#Setup" title="Permalink to this headline">#</a></h2>
<p>To run this tutorial please follow the instructions for <a class="reference external" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/tensorflow-neuron.html#setup-tensorflow-neuron">TensorFlow-Neuron Setup</a> and the <a class="reference external" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.html">Jupyter Notebook Quickstart</a> and set your kernel to “Python (tensorflow-neuron)” .</p>
<p>Next, install some additional dependencies.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">%</span><span class="k">env</span> TOKENIZERS_PARALLELISM=True #Supresses tokenizer warnings making errors easier to detect
<span class="o">!</span>pip<span class="w"> </span>install<span class="w"> </span><span class="nv">transformers</span><span class="o">==</span><span class="m">4</span>.30.2
<span class="o">!</span>pip<span class="w"> </span>install<span class="w"> </span>ipywidgets
</pre></div>
</div>
</div>
</div>
<div class="section" id="Download-From-Huggingface-and-Compile-for-AWS-Neuron">
<h2>Download From Huggingface and Compile for AWS-Neuron<a class="headerlink" href="#Download-From-Huggingface-and-Compile-for-AWS-Neuron" title="Permalink to this headline">#</a></h2>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">tensorflow</span> <span class="k">as</span> <span class="nn">tf</span>
<span class="kn">import</span> <span class="nn">tensorflow_neuron</span> <span class="k">as</span> <span class="nn">tfn</span>
<span class="kn">from</span> <span class="nn">transformers</span> <span class="kn">import</span> <span class="n">DistilBertTokenizer</span><span class="p">,</span> <span class="n">TFDistilBertModel</span>
<span class="c1"># Create a wrapper for the roberta model that will accept inputs as a list</span>
<span class="c1"># instead of a dictionary. This will allow the compiled model to be saved</span>
<span class="c1"># to disk with the model.save() fucntion.</span>
<span class="k">class</span> <span class="nc">DistilBertWrapper</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">Model</span><span class="p">):</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">model</span><span class="p">):</span>
<span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">model</span> <span class="o">=</span> <span class="n">model</span>
<span class="k">def</span> <span class="fm">__call__</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">example_inputs</span><span class="p">):</span>
<span class="k">return</span> <span class="bp">self</span><span class="o">.</span><span class="n">model</span><span class="p">({</span><span class="s1">'input_ids'</span> <span class="p">:</span> <span class="n">example_inputs</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="s1">'attention_mask'</span> <span class="p">:</span> <span class="n">example_inputs</span><span class="p">[</span><span class="mi">1</span><span class="p">]})</span>
<span class="n">tokenizer</span> <span class="o">=</span> <span class="n">DistilBertTokenizer</span><span class="o">.</span><span class="n">from_pretrained</span><span class="p">(</span><span class="s1">'distilbert-base-uncased-finetuned-sst-2-english'</span><span class="p">)</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">DistilBertWrapper</span><span class="p">(</span><span class="n">TFDistilBertModel</span><span class="o">.</span><span class="n">from_pretrained</span><span class="p">(</span><span class="s1">'distilbert-base-uncased-finetuned-sst-2-english'</span><span class="p">))</span>
<span class="n">batch_size</span> <span class="o">=</span> <span class="mi">16</span>
<span class="c1"># create example inputs with a batch size of 16</span>
<span class="n">text</span> <span class="o">=</span> <span class="p">[</span><span class="s2">"Paris is the <mask> of France."</span><span class="p">]</span> <span class="o">*</span> <span class="n">batch_size</span>
<span class="n">encoded_input</span> <span class="o">=</span> <span class="n">tokenizer</span><span class="p">(</span><span class="n">text</span><span class="p">,</span> <span class="n">return_tensors</span><span class="o">=</span><span class="s1">'tf'</span><span class="p">,</span> <span class="n">padding</span><span class="o">=</span><span class="s1">'max_length'</span><span class="p">,</span> <span class="n">max_length</span><span class="o">=</span><span class="mi">64</span><span class="p">)</span>
<span class="c1"># turn inputs into a list</span>
<span class="n">example_input</span> <span class="o">=</span> <span class="p">[</span><span class="n">encoded_input</span><span class="p">[</span><span class="s1">'input_ids'</span><span class="p">],</span> <span class="n">encoded_input</span><span class="p">[</span><span class="s1">'attention_mask'</span><span class="p">]]</span>
<span class="c1">#compile</span>
<span class="n">model_neuron</span> <span class="o">=</span> <span class="n">tfn</span><span class="o">.</span><span class="n">trace</span><span class="p">(</span><span class="n">model</span><span class="p">,</span> <span class="n">example_input</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Running on neuron:"</span><span class="p">,</span> <span class="n">model_neuron</span><span class="p">(</span><span class="n">example_input</span><span class="p">))</span>
<span class="c1"># save the model to disk to save recompilation time for next usage</span>
<span class="n">model_neuron</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s1">'./distilbert-neuron-b16'</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
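Before benchmarking, it can be useful to confirm that the compiled model is numerically close to the original. The snippet below is a minimal sketch, not part of the original notebook: it assumes both models expose a ``last_hidden_state`` output for this wrapper, and the tolerances are illustrative, since the Neuron compiler may use reduced-precision arithmetic internally.

```python
import numpy as np

# Sanity-check sketch (assumptions noted above): compare the original
# model's output with the compiled Neuron model's output on the same batch.
cpu_out = model(example_input)
neuron_out = model_neuron(example_input)
np.testing.assert_allclose(
    np.asarray(cpu_out['last_hidden_state']),
    np.asarray(neuron_out['last_hidden_state']),
    atol=1e-2, rtol=1e-2,  # illustrative tolerances
)
print("Original and Neuron outputs agree within tolerance.")
```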
<div class="section" id="Run-Basic-Inference-Benchmarking">
<h2>Run Basic Inference Benchmarking<a class="headerlink" href="#Run-Basic-Inference-Benchmarking" title="Permalink to this headline">#</a></h2>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">import</span> <span class="nn">concurrent.futures</span>
<span class="kn">import</span> <span class="nn">time</span>
<span class="n">reloaded_neuron_model</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">models</span><span class="o">.</span><span class="n">load_model</span><span class="p">(</span><span class="s1">'./distilbert-neuron-b16'</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"Reloaded model running on neuron:"</span><span class="p">,</span> <span class="n">reloaded_neuron_model</span><span class="p">(</span><span class="n">example_input</span><span class="p">))</span>
<span class="n">num_threads</span> <span class="o">=</span> <span class="mi">4</span>
<span class="n">num_inferences</span> <span class="o">=</span> <span class="mi">1000</span>
<span class="n">latency_list</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">def</span> <span class="nf">inference_with_latency_calculation</span><span class="p">(</span><span class="n">example_input</span><span class="p">):</span>
<span class="k">global</span> <span class="n">latency_list</span>
<span class="n">start</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span>
<span class="n">result</span> <span class="o">=</span> <span class="n">reloaded_neuron_model</span><span class="p">(</span><span class="n">example_input</span><span class="p">)</span>
<span class="n">end</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span>
<span class="n">latency_list</span><span class="o">.</span><span class="n">append</span><span class="p">((</span><span class="n">end</span><span class="o">-</span><span class="n">start</span><span class="p">)</span> <span class="o">*</span> <span class="mi">1000</span><span class="p">)</span>
<span class="k">return</span> <span class="n">result</span>
<span class="n">start</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span>
<span class="k">with</span> <span class="n">concurrent</span><span class="o">.</span><span class="n">futures</span><span class="o">.</span><span class="n">ThreadPoolExecutor</span><span class="p">(</span><span class="n">max_workers</span><span class="o">=</span><span class="n">num_threads</span><span class="p">)</span> <span class="k">as</span> <span class="n">executor</span><span class="p">:</span>
<span class="n">futures</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_inferences</span><span class="p">):</span>
<span class="n">futures</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">executor</span><span class="o">.</span><span class="n">submit</span><span class="p">(</span><span class="n">inference_with_latency_calculation</span><span class="p">,</span> <span class="n">example_input</span><span class="p">))</span>
<span class="k">for</span> <span class="n">future</span> <span class="ow">in</span> <span class="n">concurrent</span><span class="o">.</span><span class="n">futures</span><span class="o">.</span><span class="n">as_completed</span><span class="p">(</span><span class="n">futures</span><span class="p">):</span>
<span class="n">get_result</span> <span class="o">=</span> <span class="n">future</span><span class="o">.</span><span class="n">result</span><span class="p">()</span>
<span class="n">end</span> <span class="o">=</span> <span class="n">time</span><span class="o">.</span><span class="n">time</span><span class="p">()</span>
<span class="n">total_time</span> <span class="o">=</span> <span class="n">end</span> <span class="o">-</span> <span class="n">start</span>
<span class="n">throughput</span> <span class="o">=</span> <span class="p">(</span><span class="n">num_inferences</span> <span class="o">*</span> <span class="n">batch_size</span><span class="p">)</span><span class="o">/</span><span class="n">total_time</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Throughput was </span><span class="si">{</span><span class="n">throughput</span><span class="si">}</span><span class="s2"> samples per second."</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Latency p50 was </span><span class="si">{</span><span class="n">np</span><span class="o">.</span><span class="n">percentile</span><span class="p">(</span><span class="n">latency_list</span><span class="p">,</span><span class="w"> </span><span class="mi">50</span><span class="p">)</span><span class="si">}</span><span class="s2"> ms"</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Latency p90 was </span><span class="si">{</span><span class="n">np</span><span class="o">.</span><span class="n">percentile</span><span class="p">(</span><span class="n">latency_list</span><span class="p">,</span><span class="w"> </span><span class="mi">90</span><span class="p">)</span><span class="si">}</span><span class="s2"> ms"</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Latency p95 was </span><span class="si">{</span><span class="n">np</span><span class="o">.</span><span class="n">percentile</span><span class="p">(</span><span class="n">latency_list</span><span class="p">,</span><span class="w"> </span><span class="mi">95</span><span class="p">)</span><span class="si">}</span><span class="s2"> ms"</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="sa">f</span><span class="s2">"Latency p99 was </span><span class="si">{</span><span class="n">np</span><span class="o">.</span><span class="n">percentile</span><span class="p">(</span><span class="n">latency_list</span><span class="p">,</span><span class="w"> </span><span class="mi">99</span><span class="p">)</span><span class="si">}</span><span class="s2"> ms"</span><span class="p">)</span>
<span class="k">assert</span><span class="p">(</span><span class="n">throughput</span> <span class="o">>=</span> <span class="mf">1930.0</span><span class="p">)</span>
</pre></div>
</div>
</div>
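Note that the measured percentiles include any one-time initialization cost incurred by the first calls. A short warm-up pass before starting the timed loop, as sketched below (the iteration count is arbitrary), can make the reported latencies more representative of steady-state behavior.

```python
# Warm-up sketch: run a few untimed inferences first so one-time
# initialization does not skew the latency percentiles.
for _ in range(10):
    reloaded_neuron_model(example_input)
```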
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span>
</pre></div>
</div>
</div>
</div>
</div>
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
<a class="left-prev" id="prev-link" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/bert_demo/bert_demo.html" title="previous page">
<i class="fas fa-angle-left"></i>
<div class="prev-next-info">
<p class="prev-next-subtitle">previous</p>
<p class="prev-next-title">Running TensorFlow BERT-Large with AWS Neuron</p>
</div>
</a>
<a class="right-next" id="next-link" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html" title="next page">
<div class="prev-next-info">
<p class="prev-next-subtitle">next</p>
<p class="prev-next-title">Utilizing Neuron Capabilities Tutorials (<code class="docutils literal notranslate"><span class="pre">tensorflow-neuron</span></code>)</p>
</div>
<i class="fas fa-angle-right"></i>
</a>
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html>
|
2023-09-29T20:54:52.966Z
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.rst.txt
|
```
.. _tensorflow-ref-auto-replication-python-api:
TensorFlow Neuron (``tensorflow-neuron``) Auto Multicore Replication (Experimental)
===================================================================================
The Neuron auto multicore replication Python API enables modifying TensorFlow 2.x
traced models so that they can be automatically replicated across multiple cores.
For TensorFlow Serving models and TensorFlow 1.x models, see :ref:`tensorflow-ref-auto-replication-cli-api`.
.. contents:: Table of contents
:local:
:depth: 1
TensorFlow Neuron TF 2.x (``tensorflow-neuron TF2.x``) Auto Multicore Replication Python API (Experimental)
-----------------------------------------------------------------------------------------------------------
Method
^^^^^^
``tensorflow.neuron.auto_multicore``
Description
^^^^^^^^^^^
Converts an existing AWS-Neuron-optimized ``keras.Model`` and returns an auto-replication tagged
AWS-Multicore-Neuron-optimized ``keras.Model`` that can execute on AWS Machine Learning Accelerators.
Like the traced model, the returned ``keras.Model`` will support inference only. Attributes or
variables held by the original function or ``keras.Model`` will be dropped.
The auto model replication feature in TensorFlow-Neuron enables you to
create a model once and have the parallel replication across cores happen
automatically. The desired number of cores can be less than the total available NeuronCores
on an Inf1 instance, but not less than 1. This reduces framework memory usage, since you are not
loading the same model multiple times manually. Calls to the returned model execute
on each core in a round-robin fashion.
The returned ``keras.Model`` can be exported as SavedModel and served using
TensorFlow Serving. Please see :ref:`tensorflow-serving` for more
information about exporting to saved model and serving using TensorFlow
Serving.
Note that automatic replication only works on models compiled with pipeline size 1,
i.e. with ``--neuroncore-pipeline-cores=1``. If auto replication is not enabled, the model will default to
replicating on up to 4 cores.
See :ref:`neuron-compiler-cli-reference` for more information about compiler options.
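For reference, one way to pass compiler options such as ``--neuroncore-pipeline-cores=1`` at trace time is through the ``NEURON_CC_FLAGS`` environment variable. The following is a minimal sketch under that assumption; consult the compiler CLI reference above for the authoritative mechanism.

.. code :: python

    import os

    # Sketch (assumption): forward the pipeline-size-1 compiler flag via the
    # NEURON_CC_FLAGS environment variable before calling tfn.trace(...).
    os.environ['NEURON_CC_FLAGS'] = '--neuroncore-pipeline-cores=1'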
Arguments
^^^^^^^^^
- **func:** The ``keras.Model`` or function to be traced.
- **example_inputs:** A ``tf.Tensor`` or a tuple/list/dict of
``tf.Tensor`` objects for tracing the function. When ``example_inputs``
is a ``tf.Tensor`` or a list of ``tf.Tensor`` objects, we expect
``func`` to have calling signature ``func(example_inputs)``. Otherwise,
the expectation is that inference on ``func`` is done by calling
``func(*example_inputs)`` when ``example_inputs`` is a ``tuple``,
or ``func(**example_inputs)`` when ``example_inputs`` is a ``dict``.
The case where ``func`` accepts mixed positional and keyword arguments
is currently unsupported.
- **num_cores:** The desired number of cores across which the model will be
  automatically replicated.
Returns
^^^^^^^
- An AWS-Multicore-Neuron-optimized ``keras.Model``.
Example Python API Usage for TF2.x traced models:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code :: python

    import tensorflow as tf
    import tensorflow_neuron as tfn

    input0 = tf.keras.layers.Input(3)
    dense0 = tf.keras.layers.Dense(3)(input0)
    inputs = [input0]
    outputs = [dense0]
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    input0_tensor = tf.random.uniform([1, 3])
    model_neuron = tfn.trace(model, input0_tensor)

    num_cores = 4
    multicore_model = tfn.auto_multicore(model_neuron, input0_tensor, num_cores=num_cores)
    multicore_model(input0_tensor)
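Because calls are dispatched round-robin across the replicated cores, throughput improves when several threads invoke the returned model concurrently. A minimal sketch reusing the variables above (thread and request counts are illustrative):

.. code :: python

    import concurrent.futures

    # Issue concurrent requests so the round-robin dispatch keeps all
    # replicated cores busy; the counts here are illustrative only.
    with concurrent.futures.ThreadPoolExecutor(max_workers=num_cores) as pool:
        results = list(pool.map(lambda _: multicore_model(input0_tensor), range(100)))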
Example Python API Usage for TF2.x saved models:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code :: python

    import tensorflow as tf
    import tensorflow_neuron as tfn
    from tensorflow.python import saved_model

    input0_tensor = tf.random.uniform([1, 3])
    num_cores = 4
    # model_dir is the directory of a previously saved AWS-Neuron-optimized model
    reload_model = saved_model.load(model_dir)
    multicore_model = tfn.auto_multicore(reload_model, input0_tensor, num_cores=num_cores)
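The replicated model can then be exported again as a SavedModel for serving with TensorFlow Serving. A brief sketch (the output path is illustrative):

.. code :: python

    # Save the auto-replication-tagged model for later serving.
    multicore_model.save('./multicore_model_dir')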
.. _tensorflow-ref-auto-replication-cli-api:
TensorFlow Neuron TF1.x/TF2.x (``tensorflow-neuron TF1.x/TF2.x``) Auto Multicore Replication CLI (Experimental)
---------------------------------------------------------------------------------------------------------------
The Neuron auto multicore replication CLI enables modifying TensorFlow 1.x and TensorFlow 2.x
traced saved models so that they can be automatically replicated across multiple cores. Because it
operates on TensorFlow SavedModels, it supports both TensorFlow Serving and TensorFlow 1.x
without significant code modifications. Note that the Python API does not support TensorFlow 1.x.
Method
^^^^^^
``tf-neuron-auto-multicore MODEL_DIR --num_cores NUM_CORES --new_model_dir NEW_MODEL_DIR``
Arguments
^^^^^^^^^
- **MODEL_DIR:** The directory of a saved AWS-Neuron-optimized ``keras.Model``.
- **NUM_CORES:** The desired number of cores across which the model will be
  automatically replicated.
- **NEW_MODEL_DIR:** The directory where the AWS-Multicore-Neuron-optimized
  ``keras.Model`` will be saved.
Example CLI Usage for TF 1.x and Tensorflow-Serving saved models:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code :: bash

    tf-neuron-auto-multicore ./resnet --num_cores 8 --new_model_dir ./modified_resnet
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _tensorflow-ref-auto-replication-python-api:
TensorFlow Neuron (``tensorflow-neuron``) Auto Multicore Replication (Experimental)
===================================================================================
The Neuron auto multicore replication Python API enables modifying TensorFlow 2.x
traced models so that they can be automatically replicated across multiple cores.
For Tensorflow-Serving models and TensorFlow 1.x models, see :ref:`tensorflow-ref-auto-replication-cli-api`
.. contents:: Table of contents
:local:
:depth: 1
TensorFlow Neuron TF 2.x (``tensorflow-neuron TF2.x``) Auto Multicore Replication Python API (Experimental)
-----------------------------------------------------------------------------------------------------------
Method
^^^^^^
``tensorflow.neuron.auto_multicore``
Description
^^^^^^^^^^^
Converts an existing AWS-Neuron-optimized ``keras.Model`` and returns an auto-replication tagged
AWS-Multicore-Neuron-optimized ``keras.Model`` that can execute on AWS Machine Learning Accelerators.
Like the traced model, the returned ``keras.Model`` will support inference only. Attributes or
variables held by the original function or ``keras.Model`` will be dropped.
The auto model replication feature in TensorFlow-Neuron enables you to
create a model once and the model parallel replication would happen
automatically. The desired number of cores can be less than the total available NeuronCores
on an Inf1 instance but not less than 1. This reduces framework memory usage as you are not
loading the same model multiple times manually. Calls to the returned model will execute the call
on each core in a round-robin fashion.
The returned ``keras.Model`` can be exported as SavedModel and served using
TensorFlow Serving. Please see :ref:`tensorflow-serving` for more
information about exporting to saved model and serving using TensorFlow
Serving.
Note that the automatic replication will only work on models compiled with pipeline size 1:
via ``--neuroncore-pipeline-cores=1``. If auto replication is not enabled, the model will default to
replicate on up to 4 cores.
See :ref:`neuron-compiler-cli-reference` for more information about compiler options.
Arguments
^^^^^^^^^
- **func:** The ``keras.Model`` or function to be traced.
- **example_inputs:** A ``tf.Tensor`` or a tuple/list/dict of
``tf.Tensor`` objects for tracing the function. When ``example_inputs``
is a ``tf.Tensor`` or a list of ``tf.Tensor`` objects, we expect
``func`` to have calling signature ``func(example_inputs)``. Otherwise,
the expectation is that inference on ``func`` is done by calling
``func(*example_inputs)`` when ``example_inputs`` is a ``tuple``,
or ``func(**example_inputs)`` when ``example_inputs`` is a ``dict``.
The case where ``func`` accepts mixed positional and keyword arguments
is currently unsupported.
- **num_cores:** The desired number of cores where the model will be automatically
replicated across
Returns
^^^^^^^
- An AWS-Multicore-Neuron-optimized ``keras.Model``.
Example Python API Usage for TF2.x traced models:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code :: python
input0 = tf.keras.layers.Input(3)
dense0 = tf.keras.layers.Dense(3)(input0)
inputs = [input0]
outputs = [dense0]
model = tf.keras.Model(inputs=inputs, outputs=outputs)
input0_tensor = tf.random.uniform([1, 3])
model_neuron = tfn.trace(model, input0_tensor)
num_cores = 4
multicore_model = tfn.auto_multicore(model_neuron, input0_tensor, num_cores=num_cores)
multicore_model(input0_tensor)
Example Python API Usage for TF2.x saved models:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code :: python
from tensorflow.python import saved_model
input0_tensor = tf.random.uniform([1, 3])
num_cores = 4
reload_model = saved_model.load(model_dir)
multicore_model = tfn.auto_multicore(reload_model, input0_tensor, num_cores=num_cores)
.. _tensorflow-ref-auto-replication-cli-api:
TensorFlow Neuron TF1.x/TF2.x (``tensorflow-neuron TF1.x/TF2.x``) Auto Multicore Replication CLI (Experimental)
---------------------------------------------------------------------------------------------------------------
The Neuron auto multicore replication CLI enables modifying TensorFlow 1.x and Tensorflow 2.x
traced saved models so that they can be automatically replicated across multiple cores. By performing
this call on Tensorflow Saved Models, we can support both Tensorflow-Serving and Tensorflow 1.x
without significant modifications to the code. Note that the python API does not support Tensorflow 1.x.
Method
^^^^^^
``tf-neuron-auto-multicore MODEL_DIR --num_cores NUM_CORES --new_model_dir NEW_MODEL_DIR``
Arguments
^^^^^^^^^
- **MODEL_DIR:** The directory of a saved AWS-Neuron-optimized ``keras.Model``.
- **NUM_CORES:** The desired number of cores where the model will be automatically
replicated across
- **NEW_MODEL_DIR:** The directory of where the AWS-Multicore-Neuron-optimized
``keras.Model`` will be saved
Example CLI Usage for TF 1.x and Tensorflow-Serving saved models:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code :: python
tf-neuron-auto-multicore ./resnet --num_cores 8 --new_model_dir ./modified_resnet
</pre></body></html>
|
2023-09-29T20:54:53.138Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.rst.txt
|
```
.. _tensorflow-neuron-rn:
.. _tensorflow-neuron-release-notes:
TensorFlow Neuron (``tensorflow-neuron (TF1.x)``) Release Notes
===============================================================
.. contents:: Table of contents
:local:
:depth: 1
This document lists the release notes for the tensorflow-neuron 1.x package.
.. _tf-known-issues-and-limitations:
Known Issues and Limitations - updated 08/12/2021
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Support for serialized TensorFlow 2.x custom operators is currently limited. Serializing some operators registered from TensorFlow Text through `TensorFlow Hub <https://tfhub.dev/>`_ will cause tensorflow.neuron.trace to fail.
- Issue: When compiling large models, users might run out of memory and
  encounter this fatal error:

  ::

     terminate called after throwing an instance of 'std::bad_alloc'

  Solution: run compilation on a c5.4xlarge instance type or larger.
- Issue: When upgrading ``tensorflow-neuron`` with
``pip install tensorflow-neuron --upgrade``, the following error
message may appear; it is caused by the ``pip`` version being too low.
::
Could not find a version that satisfies the requirement TensorFlow<1.16.0,>=1.15.0 (from tensorflow-neuron)
Solution: run a ``pip install pip --upgrade`` before upgrading
``tensorflow-neuron``.
- Issue: Some Keras routines throw the following error:
::
AttributeError: 'str' object has no attribute 'decode'.
Solution: Please downgrade ``h5py`` with ``pip install 'h5py<3'``. This is caused by https://github.com/tensorflow/tensorflow/issues/44467.
tensorflow-neuron 1.x release [2.10.1.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 8/28/2023
* Minor updates
tensorflow-neuron 1.x release [2.9.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 7/19/2023
* Minor updates
tensorflow-neuron 1.x release [2.8.9.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 6/14/2023
* Minor updates
tensorflow-neuron 1.x release [2.8.1.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 5/1/2023
* Minor updates
tensorflow-neuron 1.x release [2.7.3.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 3/28/2023
* Minor updates
tensorflow-neuron 1.x release [2.6.5.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 2/24/2023
* Added support for TensorFlow versions 2.9 and 2.10
* End-of-support for TensorFlow versions 2.5 and 2.6
tensorflow-neuron 1.x release [2.4.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 11/23/2022
* Introduced the ``tf-neuron-auto-multicore`` tool to enable automatic data parallelism across multiple NeuronCores.
* Deprecated the ``NEURONCORE_GROUP_SIZES`` environment variable.
* Minor bug fixes.
tensorflow-neuron 1.x release [2.3.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 04/29/2022
* Minor bug fixes.
tensorflow-neuron 1.x release [2.1.14.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 03/25/2022
* Minor bug fixes.
tensorflow-neuron 1.x release [2.1.14.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 02/17/2022
* Minor bug fixes.
tensorflow-neuron 1.x release [2.1.13.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 02/16/2022
* Fixed a bug that caused a memory leak. The leak was approximately 128 bytes per inference and
existed in all TensorFlow Neuron versions that were part of the Neuron 1.16.0 through Neuron 1.17.0 releases. See :ref:`pre-release-content`
for the exact versions included in each release.
tensorflow-neuron 1.x release [2.1.6.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 01/20/2022
* Enhanced automatic data parallelism (e.g., when using ``NEURONCORE_GROUP_SIZES=X,Y,Z,W``) to support edge cases.
* Added support for new operators. See :ref:`neuron-cc-ops-tensorflow`.
tensorflow-neuron 1.x release [2.0.4.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 11/05/2021
* Updated Neuron Runtime (which is integrated within this package) to ``libnrt 2.2.18.0`` to fix a container issue that was preventing
the use of containers when ``/dev/neuron0`` was not present. See details in :ref:`neuron-runtime-release-notes`.
tensorflow-neuron 1.x release [2.0.3.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 10/27/2021
New in this release
-------------------
* TensorFlow 1.x (``tensorflow-neuron``) now supports Neuron Runtime 2.x (``libnrt.so`` shared library) only.
.. important::
- You must update to the latest Neuron Driver (``aws-neuron-dkms`` version 2.1 or newer)
for proper functionality of the new runtime library.
- Read the :ref:`introduce-libnrt`
application note, which describes :ref:`why we are making this
change <introduce-libnrt-why>` and
how :ref:`this change affects the Neuron
SDK <introduce-libnrt-how-sdk>` in detail.
- Read :ref:`neuron-migrating-apps-neuron-to-libnrt` for detailed information on how to
migrate your application.
Resolved Issues
---------------
* Fixed a ``neuron-cc`` argument handling bug that occurred when nothing could be compiled.
* Fixed support for cast operators applied after constants by introducing a constant-folding pass before Neuron auto-mixed-precision.
.. _11551510:
[1.15.5.1.5.1.0]
^^^^^^^^^^^^^^^^
Date: 07/02/2021
New in this release
-------------------
* Bug fixes regarding scalar inputs/outputs.
* Minor performance improvements when dynamic batch size is turned on or when model is small.
.. _11551400:
[1.15.5.1.4.0.0]
^^^^^^^^^^^^^^^^
Date: 05/28/2021
New in this release
-------------------
* Reduced the amount of input/output data movement during inference.
* Improved parallelism for dynamic batch size inference by adopting a new sharding mechanism.
* Reduced host memory usage during inference.
* ``tfn.saved_model.compile`` now generates correct code when the operator Split is used as an output.
* ``tfn.saved_model.compile`` now properly reads input tensor shape information from the SignatureDef proto.
* ``tfn.saved_model.compile`` now terminates properly when a neuron-cc compiler argument is passed but there is no successful compilation.
* Fixed a bug that produced wrong internal tensor names when the neuron-cc compiler crashes.
* Other minor bug fixes.
.. _11551330:
[1.15.5.1.3.3.0]
^^^^^^^^^^^^^^^^
Date: 05/01/2021
New in this release
-------------------
1. Minor enhancements.
.. _11551290:
[1.15.5.1.2.9.0]
^^^^^^^^^^^^^^^^
Date: 03/04/2021
New in this release
-------------------
1. Minor enhancements.
.. _11551280:
[1.15.5.1.2.8.0]
^^^^^^^^^^^^^^^^
Date: 02/24/2021
New in this release
-------------------
1. Fix for CVE-2021-3177.
.. _11551220:
[1.15.5.1.2.2.0]
^^^^^^^^^^^^^^^^
Date: 01/30/2021
New in this release
-------------------
1. Bug fixes and internal refactor.
2. Bump TensorFlow base package version to 1.15.5.
3. Introduced a new argument ``convert_constants_to_variables`` to the compilation API ``tfn.saved_model.compile``. Setting it to ``True`` can address the issue of large constants consuming too much memory in the TensorFlow runtime (see the sketch below).
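A minimal sketch of passing this argument, assuming (as in the compilation examples elsewhere in these docs) that ``tfn.saved_model.compile`` takes the input and output SavedModel directories as its two positional arguments; the directory names are illustrative:

.. code:: python

    import tensorflow.neuron as tfn

    # Convert large constants into variables at compile time so they do not
    # consume excessive memory in the TensorFlow runtime.
    tfn.saved_model.compile(
        './my_saved_model',          # illustrative input SavedModel directory
        './my_saved_model_neuron',   # illustrative output SavedModel directory
        convert_constants_to_variables=True,
    )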
.. _11541130:
[1.15.4.1.1.3.0]
^^^^^^^^^^^^^^^^
Date: 12/23/2020
New in this release
-------------------
1. Improved logging during ``tfn.saved_model.compile`` to display ``neuron-cc`` compilation progress.
2. Small performance improvement in some edge cases by optimizing the NeuronCore-executable assignment mechanism.
.. _11541021680:
[1.15.4.1.0.2168.0]
^^^^^^^^^^^^^^^^^^^
Date: 11/17/2020
New in this release
-------------------
1. tensorflow-neuron is now a plugin package that can be used together
with TensorFlow~=1.15.0 built with ``GLIBCXX_USE_CXX11_ABI=0``.
2. Improved logging during ``tfn.saved_model.compile`` to display
``neuron-cc`` logging file path, which is useful for tracking
``neuron-cc`` compilation progress.
3. Small performance improvement by utilizing shared memory more
efficiently.
.. _11531020430:
[1.15.3.1.0.2043.0]
^^^^^^^^^^^^^^^^^^^
Date: 09/22/2020
New in this release
-------------------
1. tensorflow-neuron now automatically enables data parallel mode on
four cores in one Inferentia. In ``tensorflow-model-server-neuron``,
most models can now fully utilize four cores automatically. In Python
TensorFlow, running threaded inference using ``>=4`` Python threads
in the same TensorFlow Session leads to full utilization of four
cores.
2. tensorflow-neuron now tries to enable dynamic batch size
automatically for a limited number of models, such as ResNet50.
3. Improved logging during ``tfn.saved_model.compile`` to display
input/output information about subgraphs that are going to be
compiled by ``neuron-cc``.
.. _11531019650:
[1.15.3.1.0.1965.0]
^^^^^^^^^^^^^^^^^^^
Date: 08/08/2020
.. _summary-1:
New in this release
-------------------
Various minor improvements.
.. _11531019530:
[1.15.3.1.0.1953.0]
^^^^^^^^^^^^^^^^^^^
Date: 08/05/2020
.. _summary-2:
New in this release
-------------------
Various minor improvements.
.. _11531018910:
[1.15.3.1.0.1891.0]
^^^^^^^^^^^^^^^^^^^
Date: 07/16/2020
.. _summary-3:
New in this release
-------------------
This version contains a few bug fixes and user experience improvements.
Dependency change
-----------------
1. Bump TensorFlow base package version number to 1.15.3
2. Add ``TensorFlow >= 1.15.0, < 1.16.0`` as an installation dependency
so that packages depending on TensorFlow can be installed together
with tensorflow-neuron without error
New Features
------------
1. ``tensorflow-neuron`` now displays a summary of model performance
when profiling is enabled by setting the environment variable
``NEURON_PROFILE`` (see the sketch below).
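A minimal sketch of enabling this from Python; the variable name comes from the item above, the path is illustrative, and it must be set before the model is loaded (more commonly it is simply exported in the launching shell):

.. code:: python

    import os

    # Per the fix noted below, a non-existing path is created automatically.
    os.environ['NEURON_PROFILE'] = './neuron_profile_dir'  # illustrative path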
Resolved Issues
---------------
1. The environment variable ``NEURON_PROFILE`` can now be set to a
non-existing path, which will be created automatically.
2. Fixed a bug in ``tfn.saved_model.compile`` that caused compilation
failure when ``dynamic_batch_size=True`` was specified on a SavedModel
with unknown-rank inputs.
.. _11521017960:
[1.15.2.1.0.1796.0]
^^^^^^^^^^^^^^^^^^^
Date: 6/11/2020
.. _summary-4:
New in this release
-------------------
This version contains a few bug fixes.
Major New Features
------------------
.. _resolved-issues-1:
Resolved Issues
---------------
1. Fixed a bug related to device placement. Models with device
information hardcoded to GPU can now be successfully compiled with
``tfn.saved_model.compile``.
2. Fixed a bug in ``tfn.saved_model.compile`` that caused models
containing Reshape operators to function incorrectly when
compiled with ``dynamic_batch_size=True``.
3. Fixed a bug in ``tfn.saved_model.compile`` that caused models
containing Table-related operators to initialize incorrectly after
compilation.
Known Issues and limitations
----------------------------
.. _11521015720:
[1.15.2.1.0.1572.0]
^^^^^^^^^^^^^^^^^^^
Date: 5/11/2020
.. _summary-5:
New in this release
-------------------
This version contains some bug fixes and new features.
.. _major-new-features-1:
Major New Features
------------------
- tensorflow-neuron is now built on TensorFlow 1.15.2 instead of
TensorFlow 1.15.0
.. _resolved-issues-2:
Resolved Issues
---------------
- Fixed a bug that caused some Neuron runtime resources to not be
released when a tensorflow-neuron process terminated with in-flight
inferences.
- The inference timeout value set at compile time is now correctly
recognized at runtime.
Known Issues and limitations
----------------------------
.. _11501013330:
[1.15.0.1.0.1333.0]
^^^^^^^^^^^^^^^^^^^
Date: 3/26/2020
.. _summary-6:
New in this release
-------------------
.. _major-new-features-2:
Major New Features
------------------
- Improved performance of the interface between TensorFlow and the Neuron runtime.
.. _resolved-issues-3:
Resolved Issues
---------------
- Fixed a bug in the Neuron runtime adaptor operator's shape function when
dynamic batch size inference is enabled.
- The framework method (``tensorflow.neuron.saved_model.compile``) improved
handling of compiler timeout termination by letting the compiler clean up
before exiting.
.. _known-issues-and-limitations-2:
Known Issues and limitations
----------------------------
.. _11501012400:
[1.15.0.1.0.1240.0]
^^^^^^^^^^^^^^^^^^^
Date: 2/27/2020
.. _summary-7:
New in this release
-------------------
.. _major-new-features-3:
Major New Features
------------------
- Enabled runtime memory optimizations by default to improve inference
performance, specifically in cases with large input/output tensors.
- ``tfn.saved_model.compile`` now displays a warning message instead of
"successfully compiled" if fewer than 30% of operators are mapped to
Inferentia.
- Improved error messages. Runtime failure error messages are now more
descriptive and also provide instructions to restart neuron-rtd when
necessary.
.. _resolved-issues-4:
Resolved Issues
---------------
.. _known-issues-and-limitations-3:
Known Issues and Limitations
----------------------------
- Issue: When compiling a large model, you may encounter the following error:
::
terminate called after throwing an instance of 'std::bad_alloc'
Solution: run compilation on a c5.4xlarge instance type or larger.
Other Notes
-----------
.. _1150109970:
[1.15.0.1.0.997.0]
^^^^^^^^^^^^^^^^^^
Date: 1/27/2020
.. _summary-8:
New in this release
-------------------
.. _major-new-features-4:
Major New Features
------------------
- Added support for NCHW pooling operators in ``tfn.saved_model.compile``.
.. _resolved-issues-5:
Resolved Issues
---------------
- Fixed GRPC transient status error issue.
- Fixed a graph partitioner issue with control inputs.
.. _known-issues-and-limitations-4:
Known Issues and Limitations
----------------------------
- Issue: When compiling a large model, you may encounter the following error:
::
terminate called after throwing an instance of 'std::bad_alloc'
Solution: run compilation on a c5.4xlarge instance type or larger.
.. _other-notes-1:
Other Notes
-----------
.. _1150108030:
[1.15.0.1.0.803.0]
^^^^^^^^^^^^^^^^^^
Date: 12/20/2019
.. _summary-9:
New in this release
-------------------
.. _major-new-features-5:
Major New Features
------------------
.. _resolved-issues-6:
Resolved Issues
---------------
- Improved handling of ``tf.neuron.saved_model.compile`` arguments
.. _known-issues-and-limitations-5:
Known Issues and Limitations
----------------------------
.. _other-notes-2:
Other Notes
-----------
.. _1150107490:
[1.15.0.1.0.749.0]
^^^^^^^^^^^^^^^^^^
Date: 12/1/2019
.. _summary-10:
New in this release
-------------------
.. _major-new-features-6:
Major New Features
------------------
.. _resolved-issues-7:
Resolved Issues
---------------
- Fixed a race condition between model load and model unload when the
process is killed.
- Removed unnecessary GRPC calls when the process is killed.
.. _known-issues-and-limitations-6:
Known Issues and Limitations
----------------------------
- When compiling a large model, you may encounter “terminate called after
throwing an instance of 'std::bad_alloc'”. Solution: run compilation
on a c5.4xlarge instance type or larger.
- The pip package ``wrapt`` may have a conflicting version in some
installations. This is seen when this error occurs:
.. code:: bash
ERROR: Cannot uninstall 'wrapt'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
To solve this, you can update wrapt to the newer version:
.. code:: bash
python3 -m pip install wrapt --ignore-installed
python3 -m pip install tensorflow-neuron
Within a Conda environment:
.. code:: bash
conda update wrapt
conda update tensorflow-neuron
.. _other-notes-3:
Other Notes
-----------
.. _1150106630:
[1.15.0.1.0.663.0]
^^^^^^^^^^^^^^^^^^
Date: 11/25/2019
.. _summary-11:
New in this release
-------------------
This version is available only in the released DLAMI v26.0 and is based on
TensorFlow version 1.15.0. Please
:ref:`update <dlami-rn-known-issues>` to the latest version.
.. _major-new-features-7:
Major New Features
------------------
.. _resolved-issues-8:
Resolved Issues
---------------
Known Issues and Limits
-----------------------
Models Supported
----------------
The following models have successfully run on neuron-inferentia systems:
1. BERT_LARGE and BERT_BASE
2. Transformer
3. Resnet50 V1/V2
4. Inception-V2/V3/V4
.. _other-notes-4:
Other Notes
-----------
- Python versions supported:
- 3.5, 3.6, 3.7
- Linux distribution supported:
- Ubuntu 18, Amazon Linux 2
```
|
|
2023-09-29T20:54:53.151Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.rst.txt
|
```
Misc (``tensorflow-neuron``)
============================
.. toctree::
:maxdepth: 1
:hidden:
/release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron
/release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2
/frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops
/release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow
.. include:: /frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.txt
```
|
|
2023-09-29T20:54:53.158Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.rst.txt
|
```
.. _tensorflow-ref-neuron-accelerated-ops:
TensorFlow Neuron (``tensorflow-neuron (TF2.x)``) Accelerated Python APIs and Graph Ops
========================================================================================
This page lists TensorFlow 2.x Python APIs and graph operators that are
accelerated by AWS Neuron. The lists are not exhaustive: TensorFlow 2.x Python
APIs or graph operators that are not listed here may still be accelerated if
they are composed of accelerated primitives; otherwise they will be executed on CPU
without significant acceleration. The TensorFlow Neuron integration contains
an automatic operator-device-placement mechanism that strives to maximize
the execution efficiency of your deep learning models on AWS Machine Learning
ASIC instances (see the sketch below).
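As a minimal sketch of this workflow (using ``tfn.trace``, the TF2.x tracing API shown elsewhere in these docs, on an illustrative toy model): the ``Dense`` layers lower to accelerated primitives such as ``MatMul``, ``BiasAdd``, and ``Relu``, while any unlisted operator is left on CPU by the placement mechanism described above:

.. code:: python

    import tensorflow as tf
    import tensorflow.neuron as tfn

    # A toy model composed entirely of accelerated primitives.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
        tf.keras.layers.Dense(2),
    ])
    example = tf.random.uniform([1, 4])

    # Tracing partitions the graph: listed ops run on Neuron, the rest on CPU.
    model_neuron = tfn.trace(model, example)
    print(model_neuron(example))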
Accelerated Python APIs
--------------------------------
+---------------+-----------------------------------+-----------------------------------------------------------+
| Module | Accelerated Python API | Comments |
+===============+===================================+===========================================================+
| ``tf`` | ``tf.abs`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.add`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.add_n`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.broadcast_static_shape`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.cast`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.constant`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.convert_to_tensor`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.cumsum`` | ``axis`` must be a compile-time constant. |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.einsum`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.erf`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.exp`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.identity`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.matmul`` | Uses float16/bfloat16 matmul with float32 accumulation. |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.maximum`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.minimum`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.multiply`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.negative`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.range`` | ``start``, ``limit`` and ``delta`` arguments must be |
| | | compile-time constants. |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.realdiv`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.reciprocal`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.reduce_all`` | ``axis`` must be a compile-time constant. |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.reduce_any`` | ``axis`` must be a compile-time constant. |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.reduce_max`` | ``axis`` must be a compile-time constant. |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.reduce_min`` | ``axis`` must be a compile-time constant. |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.reduce_prod`` | ``axis`` must be a compile-time constant. |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.reduce_sum`` | ``axis`` must be a compile-time constant. |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.reshape`` | ``shape`` argument must be a compile-time constant. |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.rsqrt`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.scalar_mul`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.shape`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.shape_n`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.sigmoid`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.size`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.slice`` | ``size`` must be a compile-time constant. In addition, |
| | | |
| | | either ``begin`` must be a compile-time constant or |
| | | |
| | | ``size`` must be non-negative. |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.sqrt`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.square`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.squared_difference`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.squeeze`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.stack`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.stop_gradient`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.strided_slice`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.tanh`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.tensordot`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.to_bfloat16`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.to_float`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.truediv`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| ``tf.layers`` | ``tf.layers.batch_normalization`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.layers.dense`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.layers.flatten`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| ``tf.nn`` | ``tf.nn.batch_normalization`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.nn.bias_add`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.nn.dropout`` | Always treated as ``tf.identity`` during inference. |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.nn.fused_batch_norm`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.nn.leaky_relu`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.nn.relu`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.nn.relu6`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.nn.relu_layer`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
| | ``tf.nn.softmax`` | |
+---------------+-----------------------------------+-----------------------------------------------------------+
Accelerated graph operators
--------------------------------
.. code:: python
Add
AddN
AddV2
BatchMatMul
BatchMatMulV2
BiasAdd
Cast
Const
Cumsum
Einsum
Erf
Exp
ExpandDims
FusedBatchNorm
FusedBatchNormV2
FusedBatchNormV3
Greater
Identity
LeakyRelu
MatMul
Max
Maximum
Minimum
Mean
Mul
Neg
Pack
RealDiv
Relu
Relu6
Reshape
Rsqrt
Sigmoid
Softmax
Split
SplitV
Sqrt
Square
SquaredDifference
Squeeze
StridedSlice
Sub
Sum
Tanh
Transpose
Unpack
The lists share many commonalities with `Available TensorFlow Ops <https://cloud.google.com/tpu/docs/tensorflow-ops>`_. Portions of this page are modifications based on work created and `shared by Google <https://developers.google.com/terms/site-policies>`_ and used according to terms described in the `Creative Commons 4.0 Attribution License <https://creativecommons.org/licenses/by/4.0/>`_.
```
|
|
2023-09-29T20:54:53.172Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.rst.txt
|
```
.. _neuron-cc-ops-tensorflow:
TensorFlow Neuron (``tensorflow-neuron (TF1.x)``) Supported operators
=====================================================================
To see a list of supported operators for TensorFlow 1.x, run the following command:
``neuron-cc list-operators --framework TENSORFLOW``
.. _neuron-compiler-release-1910:
Neuron Compiler Release [1.9.1.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: 01/20/2022
Added
::
isNan
FusedBatchNormV3
.. _neuron-compiler-release-1730:
Neuron Compiler Release [1.7.3.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added
::
ArgMax
ArgMin
.. _neuron-compiler-release-16130:
Neuron Compiler Release [1.6.13.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-1550:
Neuron Compiler Release [1.5.5.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-1400:
Neuron Compiler Release [1.4.0.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-1300:
Neuron Compiler Release [1.3.0.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added
::
Abs
Cos
DepthwiseConv2dNative
Erf
Rank
Sin
Size
.. _neuron-compiler-release-1270:
Neuron Compiler Release [1.2.7.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-1220:
Neuron Compiler Release [1.2.2.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added
::
AdjustContrastv2
AdjustSaturation
BroadcastTo
Cholesky
Conv2DBackpropInput
Conv3D
CropAndResize
FloorDiv
HSVToRGB
InvertPermutation
L2Loss
Log1p
MatrixBandPart
MatrixDiag
MatrixSetDiag
MatrixTriangularSolve
MaxPool3D
MirrorPad
RGBToHSV
Range
SoftmaxCrossEntropyWithLogits
SquaredDifference
StopGradient
Unpack
UnsortedSegmentSum
.. _neuron-compiler-release-10240450:
Neuron Compiler Release [1.0.24045.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added ``FloorDiv``, ``Softplus``, ``Unstack``
.. _neuron-compiler-release-1018001:
Neuron Compiler Release [1.0.18001]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-1016764:
Neuron Compiler Release [1.0.16764]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added:
::
LogSoftmax
Neg
ResizeBilinear
ResizeNearestNeighbor
.. _neuron-compiler-release-1015275:
Neuron Compiler Release [1.0.15275]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added
::
Neg
Removed
::
Log
(was inadvertently advertised as supported)
.. _neuron-compiler-release-1012696:
Neuron Compiler Release [1.0.12696]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-109410:
Neuron Compiler Release [1.0.9410]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-107878:
Neuron Compiler Release [1.0.7878]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-106801:
Neuron Compiler Release [1.0.6801]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-105939:
Neuron Compiler Release [1.0.5939]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-105301:
Neuron Compiler Release [1.0.5301]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-1046800:
Neuron Compiler Release [1.0.4680.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
Add
AddV2
All
AvgPool
BatchMatMul
BatchMatMulV2
BatchToSpaceND
BiasAdd
Cast
Ceil
Concat
ConcatV2
Const
Conv2D
Equal
Exp
ExpandDims
Fill
Floor
FusedBatchNorm
Greater
GreaterEqual
Identity
LRN
LeakyRelu
Less
LessEqual
Log
LogicalAnd
LogicalNot
LogicalOr
MatMul
Max
MaxPool
Maximum
Mean
Min
Minimum
Mul
NoOp
NotEqual
Pack
Pad
PadV2
Placeholder
Pow
Prod
RandomUniform
RealDiv
Reciprocal
Relu
Relu6
Reshape
ReverseV2
Round
Rsqrt
Select
Shape
Sigmoid
Sign
Slice
Softmax
SpaceToBatchND
Split
SplitV
Sqrt
Square
Squeeze
StridedSlice
Sub
Sum
Tanh
Tile
Transpose
ZerosLike
```
|
2023-09-29T20:54:53.251Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.rst.txt
|
```
.. _tensorflow-neuron-rn-v2:
TensorFlow Neuron (``tensorflow-neuron (TF2.x)``) Release Notes
===============================================================
.. contents:: Table of contents
:local:
:depth: 1
This document lists the release notes for the tensorflow-neuron 2.x packages.
.. _tf-known-issues-and-limitations:
Known Issues and Limitations - updated 08/12/2021
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- Support for serialized TensorFlow 2.x custom operators is currently limited. Serializing some operators registered from tensorflow-text through `TensorFlow Hub <https://tfhub.dev/>`_ will cause tensorflow.neuron.trace to fail.
- A memory leak exists in the latest releases of TensorFlow Neuron for versions 2.1, 2.2, 2.3, and 2.4.
- Issue: When compiling large models, users might run out of memory and
encounter this fatal error.
::
terminate called after throwing an instance of 'std::bad_alloc'
Solution: run compilation on a c5.4xlarge instance type or larger.
- Issue: When upgrading ``tensorflow-neuron`` with
``pip install tensorflow-neuron --upgrade``, the following error
message may appear, which is caused by ``pip`` version being too low.
::
Could not find a version that satisfies the requirement tensorflow<1.16.0,>=1.15.0 (from tensorflow-neuron)
Solution: run a ``pip install pip --upgrade`` before upgrading
``tensorflow-neuron``.
- Issue: Some Keras routines throw the following error:
::
AttributeError: 'str' object has no attribute 'decode'.
Solution: Please downgrade `h5py` by running `pip install 'h5py<3'`. This is caused by https://github.com/TensorFlow/TensorFlow/issues/44467.
tensorflow-neuron 2.x release [2.10.1.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 09/15/2023
* Minor updates.
tensorflow-neuron 2.x release [2.9.3.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 7/19/2023
* Minor updates.
tensorflow-neuron 2.x release [2.8.9.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 6/14/2023
* Added Python 3.10 support.
tensorflow-neuron 2.x release [2.8.1.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 05/01/2023
* Added support for tracing models larger than 2 GB through the environment variable ``NEURON_CC_FLAGS='--extract-weights INSTANCE_TYPE'`` for all inf1 instance types.
* Neuron release 2.10 will be the last release to include support for tensorflow-neuron version 2.7. Future Neuron releases will not include tensorflow-neuron version 2.7.
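A hedged sketch of how this flag might be set in Python before tracing (the instance type below is a hypothetical placeholder, not a value from this release note):

.. code:: python

   import os

   # Enable weight extraction for models larger than 2 GB; set this before
   # tracing. Replace inf1.6xlarge with the Inf1 instance type you target.
   os.environ['NEURON_CC_FLAGS'] = '--extract-weights inf1.6xlarge'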
tensorflow-neuron 2.x release [2.7.4.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 04/19/2023
* Minor updates.
tensorflow-neuron 2.x release [2.7.3.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 03/28/2023
* Introduce the ``tfn.analyze_model`` function that displays information about the supported and unsupported operators of a traceable model.
* Introduce the ``on_neuron_ratio`` attribute of AWS Optimized Neuron Models returned by ``tfn.trace``, which is the percentage of ops placed on Neuron after compilation.
tensorflow-neuron 2.x release [2.6.5.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 02/24/2023
* Minor updates.
tensorflow-neuron 2.x release [2.6.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 2/24/2023
* Minor bug fixes.
tensorflow-neuron 2.x release [2.4.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 11/22/2022
* Experimental support for tracing models larger than 2 GB through environment variable ``NEURON_CC_FLAGS='--extract-weights'``.
* Introduce ``tfn.auto_multicore`` Python API to enable automatic data parallel on multiple NeuronCores.
* Introduce ``tf-neuron-auto-multicore`` tool to enable automatic data parallel on multiple NeuronCores.
* Deprecated the NEURONCORE_GROUP_SIZES environment variable.
* Minor bug fixes.
tensorflow-neuron 2.x release [2.3.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 04/29/2022
* Added support for Tensorflow 2.8.0.
* Added support for the Slice operator.
* The graph partitioner now prefers to place less compute-intensive operators on CPU if the model already contains a large number of compute-intensive operators.
* Fixed `Github issue #408 <https://github.com/aws/aws-neuron-sdk/issues/408>`_; the fix solves a data type handling bug in ``tfn.trace`` when the model contains Conv2D operators.
tensorflow-neuron 2.x release [2.2.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 03/25/2022
* Updated TensorFlow 2.5 to version 2.5.3.
* Added support for TensorFlow 2.6 and 2.7.
* Added a warning message when calling ``tfn.saved_model.compile`` API. In tensorflow-neuron 2.x you should call :ref:`tensorflow.neuron.trace <tensorflow-ref-neuron-tracing-api>`. ``tfn.saved_model.compile`` API supports only partial functionality of :ref:`tensorflow.neuron.trace <tensorflow-ref-neuron-tracing-api>` and will be deprecated in the future.
tensorflow-neuron 2.x release [2.1.14.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 02/17/2022
* Fixed a bug in TensorFlow Neuron versions 2.1, 2.2, 2.3, and 2.4. The fixed bug was causing a memory leak of 128 bytes for each inference.
* Improved warning message when calling deprecated compilation API under tensorflow-neuron 2.x.
tensorflow-neuron 2.x release [2.1.13.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 02/16/2022
* Fixed a bug that caused a memory leak. The memory leak was approximately 128 bytes per inference and
exists in all TensorFlow Neuron versions that are part of the Neuron 1.16.0 through Neuron 1.17.0 releases. See :ref:`pre-release-content`
for the exact versions included in each release. This release only addresses the leak in TensorFlow Neuron 2.5. Future releases of TensorFlow Neuron will fix the leak in the other versions as well (2.1, 2.2, 2.3, 2.4).
tensorflow-neuron 2.x release [2.1.6.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 01/20/2022
* Updated TensorFlow 2.5 to version 2.5.2.
* Enhanced auto data parallel (e.g. when using NEURONCORE_GROUP_SIZES=X,Y,Z,W) to support edge cases.
* Fixed a bug that may, in some cases, cause tensorflow-neuron to generate a scalar gather instruction with incorrect arguments.
tensorflow-neuron 2.x release [2.0.4.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 11/05/2021
* Updated Neuron Runtime (which is integrated within this package) to ``libnrt 2.2.18.0`` to fix a container issue that was preventing
the use of containers when /dev/neuron0 was not present. See details here :ref:`neuron-runtime-release-notes`.
tensorflow-neuron 2.x release [2.0.3.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 10/27/2021
New in this release
-------------------
* TensorFlow 2.x (``tensorflow-neuron``) now supports only Neuron Runtime 2.x (the ``libnrt.so`` shared library).
.. important::
- You must update to the latest Neuron Driver (``aws-neuron-dkms`` version 2.1 or newer)
for proper functionality of the new runtime library.
- Read :ref:`introduce-libnrt`
application note that describes :ref:`why are we making this
change <introduce-libnrt-why>` and
how :ref:`this change will affect the Neuron
SDK <introduce-libnrt-how-sdk>` in detail.
- Read :ref:`neuron-migrating-apps-neuron-to-libnrt` for detailed information of how to
migrate your application.
* Updated TensorFlow 2.3.x from TensorFlow 2.3.3 to TensorFlow 2.3.4.
* Updated TensorFlow 2.4.x from TensorFlow 2.4.2 to TensorFlow 2.4.3.
* Updated TensorFlow 2.5.x from TensorFlow 2.5.0 to TensorFlow 2.5.1.
Resolved Issues
---------------
* Fixed a bug that can cause illegal compiler optimizations.
* Fixed a bug that can cause dynamic-shape operators to be placed on Neuron.
.. _2501680:
tensorflow-neuron 2.x release [1.6.8.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 08/12/2021
New in this release
-------------------
* First release of TensorFlow 2.x integration; Neuron now supports TensorFlow versions 2.1.4, 2.2.3, 2.3.3, 2.4.2, and 2.5.0.
* New public API tensorflow.neuron.trace: traces a TensorFlow 2.x keras.Model or a Python callable that can be decorated by tf.function, and returns an AWS-Neuron-optimized keras.Model that can execute on AWS Machine Learning Accelerators. A usage sketch follows the version list below.
**Please note** that the TensorFlow 1.x SavedModel compilation API tensorflow.neuron.saved_model.compile is not supported in tensorflow-neuron 2.x. It continues to function in tensorflow-neuron 1.15.x.
* Included versions:
- tensorflow-neuron-2.5.0.1.6.8.0
- tensorflow-neuron-2.4.2.1.6.8.0
- tensorflow-neuron-2.3.3.1.6.8.0
- tensorflow-neuron-2.2.3.1.6.8.0
- tensorflow-neuron-2.1.4.1.6.8.0
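A minimal usage sketch of the new tracing API (the model choice, input shape, and save path below are illustrative placeholders, not part of the release notes):

.. code:: python

   import tensorflow as tf
   import tensorflow.neuron as tfn

   # Placeholder model; any traceable keras.Model or tf.function-compatible
   # callable works here.
   model = tf.keras.applications.ResNet50()
   example_input = tf.random.uniform([1, 224, 224, 3])

   # Returns an AWS-Neuron-optimized keras.Model
   model_neuron = tfn.trace(model, example_input)
   model_neuron.save('./resnet50_neuron')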
```
|
2023-09-29T20:54:53.788Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/tensorflow/training.rst.txt
|
```
Training
========
.. include:: training.txt
```
|
2023-09-29T20:54:53.861Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/index.rst.txt
|
```
.. _mxnet-neuron-main:
.. _neuron-mxnet:
MXNet Neuron
============
MXNet Neuron unlocks high-performance and cost-effective deep learning acceleration on AWS Trainium-based and Inferentia-based Amazon EC2 instances.
MXNet Neuron enables native MXNet models to be accelerated on Neuron devices, so you can use your existing framework application and get started easily with minimal code changes.
.. toctree::
:maxdepth: 1
:hidden:
/frameworks/mxnet-neuron/mxnet-neuron-setup
.. toctree::
:maxdepth: 1
:hidden:
Inference (Inf1) </frameworks/mxnet-neuron/inference-mxnet-neuron>
.. contents:: Table of contents
:local:
:depth: 2
.. card:: MXNet Neuron (``mxnet-neuron``) for Inference on ``Inf1``
:link: inference-mxnet-neuron
:link-type: ref
:class-body: sphinx-design-class-title-small
```
|
2023-09-29T20:54:53.951Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.rst.txt
|
```
Tutorials (``mxnet-neuron``)
=============================
.. toctree::
:maxdepth: 1
:hidden:
Computer Vision Tutorials </frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision>
Natural Language Processing (NLP) Tutorials </frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp>
Utilizing Neuron Capabilities Tutorials </frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities>
.. include:: /frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.txt
```
|
2023-09-29T20:54:53.979Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/mxnet-neuron-setup.rst.txt
|
```
.. _mxnet-setup:
MXNet Neuron Setup
==================
.. include:: mxnet-neuron-setup.txt
```
|
2023-09-29T20:54:53.986Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.rst.txt
|
```
Utilizing Neuron Capabilities Tutorials (``mxnet-neuron``)
==========================================================
* NeuronCore Groups tutorial :ref:`[html] </src/examples/mxnet/resnet50_neuroncore_groups.ipynb>` :mxnet-neuron-src:`[notebook] <resnet50_neuroncore_groups.ipynb>`
```
|
2023-09-29T20:54:53.993Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.rst.txt
|
```
Natural Language Processing (NLP) Tutorials (``mxnet-neuron``)
==============================================================
* MXNet 1.8: Using data parallel mode tutorial :ref:`[html] </src/examples/mxnet/data_parallel/data_parallel_tutorial.ipynb>` :mxnet-neuron-src:`[notebook] <data_parallel/data_parallel_tutorial.ipynb>`
```
|
2023-09-29T20:54:53.999Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/inference-mxnet-neuron.rst.txt
|
```
.. _inference-mxnet-neuron:
Inference (mxnet-neuron)
========================
.. toctree::
:maxdepth: 1
:hidden:
Tutorials </frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron>
API Reference Guide </frameworks/mxnet-neuron/api-reference-guide>
Developer Guide </frameworks/mxnet-neuron/developer-guide>
Misc </frameworks/mxnet-neuron/misc-mxnet-neuron>
.. include:: inference-mxnet-neuron.txt
```
|
2023-09-29T20:54:54.047Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/api-compilation-python-api.rst.txt
|
```
.. _ref-mxnet-neuron-compilation-python-api:
Neuron Apache MXNet (Incubating) Compilation Python API
=======================================================
The MXNet-Neuron compilation Python API provides a method to compile a
model graph for execution on Inferentia.
Description
-----------
Within the graph or subgraph, the compile method selects and sends
Neuron-supported operations to Neuron-Compiler for compilation and saves
the compiled artifacts in the graph. Uncompilable operations are kept as
original operations for framework execution.
The compiled graph can be saved using the MXNet save_checkpoint and
served using MXNet Model Serving. Please see
:ref:`mxnet-neuron-model-serving` for more information about exporting
to saved model and serving using MXNet Model Serving.
Options can be passed to Neuron compiler via the compile function. For
example, the “\ ``--neuroncore-pipeline-cores``\ ” option directs Neuron compiler
to compile each subgraph to fit in the specified number of NeuronCores.
This number can be less than the total available NeuronCores on an Inf1
instance. See :ref:`neuron-compiler-cli-reference` for more information
about compiler options.
For debugging compilation, set the SUBGRAPH_INFO=1 environment variable before
calling the compilation script. The extracted subgraphs are preserved as hidden
files in the run directory. For more information, see :ref:`neuron_gatherinfo`.
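A minimal sketch of this debugging setup, assuming the variable is set in the same process before the compile call (it can equally be exported in the shell before launching the compilation script):

.. code:: python

   import os

   # Preserve the extracted Neuron subgraphs as hidden files in the run
   # directory; must be set before neuron.compile is invoked.
   os.environ['SUBGRAPH_INFO'] = '1'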
**MXNet 1.5**
-------------
Method
------
.. code:: python
from mxnet.contrib import neuron
neuron.compile(sym, args, aux, inputs, **compile_args)
Arguments
---------
- **sym** - Symbol object loaded from symbol.json file
- **args** - args/params dictionary loaded from params file
- **aux** - aux/params dictionary loaded from params file
- **inputs** - a dictionary with key/value mappings for input name to
input numpy arrays
- **kwargs** (optional) - a dictionary with key/value mappings for
MXNet-Neuron compilation and Neuron Compiler options.
- For example, to limit the number of NeuronCores per subgraph, use
``compile_args={'--neuroncore-pipeline-cores' : N}`` where N is an integer
representing the maximum number of NeuronCores per subgraph.
- Additional compiler flags can be passed using
``'flags' : [<flags>]``, where ``<flags>`` is a comma-separated list of
strings. See :ref:`neuron_gatherinfo` for an example of passing debug
flags to the compiler.
flags to compiler.
- Advanced option to exclude node names:
``compile_args={'excl_node_names' : [<node names>]}``, where ``<node names>`` is a
comma-separated list of node name strings.
Returns
-------
- **sym** - new partitioned symbol
- **args** - modified args/params
- **auxs** - modified aux/params
Example Usage: Compilation
--------------------------
The following is an example usage of the compilation, with default
compilation arguments:
.. code:: python
from mxnet.contrib import neuron
...
neuron.compile(sym, args, aux, inputs={'data' : img})
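A hedged sketch combining the optional arguments described above (the option values and node name are examples only, not defaults):

.. code:: python

   from mxnet.contrib import neuron

   # sym, args, aux are loaded from an MXNet checkpoint; img is an input array
   compile_args = {
       '--neuroncore-pipeline-cores': 4,    # example: cap NeuronCores per subgraph
       'excl_node_names': ['fc1_output'],   # example: keep this node on the framework
   }
   csym, cargs, cauxs = neuron.compile(sym, args, aux,
                                       inputs={'data': img},
                                       **compile_args)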
**MXNet 1.8**
-------------
Method
------
.. code:: python
import mx_neuron as neuron
neuron.compile(obj, args=None, aux=None, inputs=None, **compile_args)
Arguments
---------
- **obj** - Symbol object loaded from symbol.json file or gluon.HybridBlock object
- **args** (optional) - args/params dictionary loaded from params file. Only needed in case of Symbol object
- **aux** (optional) - aux/params dictionary loaded from params file. Only needed in case of Symbol object
- **inputs** - a dictionary with key/value mappings for input name to
input numpy arrays.
- **kwargs** (optional) - a dictionary with key/value mappings for
MXNet-Neuron compilation and Neuron Compiler options.
- For example, to limit the number of NeuronCores per subgraph, use
``compile_args={'--neuroncore-pipeline-cores' : N}`` where N is an integer
representing the maximum number of NeuronCores per subgraph.
- Additional compiler flags can be passed using
``'flags' : [<flags>]``, where ``<flags>`` is a comma-separated list of
strings. See :ref:`neuron_gatherinfo` for an example of passing debug
flags to the compiler.
- Advanced option to exclude node names:
``compile_args={'excl_node_names' : [<node names>]}``, where ``<node names>`` is a
comma-separated list of node name strings.
- **work_dir** (optional) - relative or absolute path for storing compiler artifacts (including params and jsons) generated
during compilation when SUBGRAPH_INFO=1.
Returns
-------
- **(sym, args, auxs)** - for a Symbol object as input. sym, args, and auxs are the new partitioned symbol, modified args/params, and modified aux/params, respectively.
- **(obj)** - for a gluon.HybridBlock object as input. obj is the partitioned and optimized gluon.HybridBlock object for the Neuron backend.
Example Usage: Compilation
--------------------------
The following is an example usage of the compilation, with default
compilation arguments for symbol object:
.. code:: python
import mx_neuron as neuron
...
neuron.compile(sym, args, aux, inputs={'data' : img})
The following is an example usage of the compilation, with default
compilation arguments for gluon.HybridBlock object (only supported in MXNet-Neuron 1.8):
.. code:: python
import mx_neuron as neuron
...
neuron.compile(obj, inputs={'data' : img})
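Similarly for the MXNet 1.8 API, a hedged sketch with explicit options (the values below are illustrative placeholders, not defaults):

.. code:: python

   import mx_neuron as neuron

   # obj is a Symbol or gluon.HybridBlock; img is an example input array
   compile_args = {
       '--neuroncore-pipeline-cores': 2,    # example value
       'work_dir': './neuron_artifacts',    # example path, used when SUBGRAPH_INFO=1
   }
   compiled = neuron.compile(obj, inputs={'data': img}, **compile_args)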
Example Usage: Extract Compilation Statistics
---------------------------------------------
To extract operation counts, insert the following code after the compile
step (assuming ``csym`` is the compiled MXNet symbol):
.. code:: python
import json
# Return list of nodes from MXNet symbol
def sym_nodes(sym):
return json.loads(sym.tojson())['nodes']
# Return number of operations in node list
def count_ops(graph_nodes):
return len([x['op'] for x in graph_nodes if x['op'] != 'null'])
# Return triplet of compile statistics
# - count of operations in symbol database
# - number of Neuron subgraphs
# - number of operations compiled to Neuron runtime
def get_compile_stats(sym):
cnt = count_ops(sym_nodes(sym))
neuron_subgraph_cnt = 0
neuron_compiled_cnt = 0
for g in sym_nodes(sym):
if g['op'] == '_neuron_subgraph_op':
neuron_subgraph_cnt += 1
for sg in g['subgraphs']:
neuron_compiled_cnt += count_ops(sg['nodes'])
return (cnt, neuron_subgraph_cnt, neuron_compiled_cnt)
original_cnt = count_ops(sym_nodes(sym))
post_compile_cnt, neuron_subgraph_cnt, neuron_compiled_cnt = get_compile_stats(csym)
print("INFO:mxnet: Number of operations in original model: ", original_cnt)
print("INFO:mxnet: Number of operations in compiled model: ", post_compile_cnt)
print("INFO:mxnet: Number of Neuron subgraphs in compiled model: ", neuron_subgraph_cnt)
print("INFO:mxnet: Number of operations placed on Neuron runtime: ", neuron_compiled_cnt)
.. code:: bash
INFO:mxnet: Number of operations in original model: 67
INFO:mxnet: Number of operations in compiled model: 4
INFO:mxnet: Number of Neuron subgraphs in compiled model: 2
INFO:mxnet: Number of operations placed on Neuron runtime: 65
```
|
2023-09-29T20:54:54.227Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/api-reference-guide.rst.txt
|
```
API Reference Guide (mxnet-neuron)
==================================
.. toctree::
:maxdepth: 1
:hidden:
/frameworks/mxnet-neuron/api-compilation-python-api
.. include:: /frameworks/mxnet-neuron/api-reference-guide.txt
```
|
2023-09-29T20:54:54.404Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/transformers-neuronx/index.rst.txt
|
```
.. _transformers_neuronx_readme:
Transformers Neuron (``transformers-neuronx``)
==============================================
.. toctree::
:maxdepth: 1
:hidden:
Setup </libraries/transformers-neuronx/setup/index>
Developer Guide </libraries/transformers-neuronx/developer-guide>
Tutorials </libraries/transformers-neuronx/transformers-neuronx-tutorials>
Misc </libraries/transformers-neuronx/transformers-neuronx-misc>
.. include:: /libraries/transformers-neuronx/transformers-neuronx.txt
```
|
2023-09-29T20:54:54.432Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.rst.txt
|
```
Computer Vision Tutorials (``mxnet-neuron``)
============================================
* ResNet-50 tutorial :ref:`[html] </src/examples/mxnet/resnet50/resnet50.ipynb>` :mxnet-neuron-src:`[notebook] <resnet50/resnet50.ipynb>`
* Model Serving tutorial :ref:`[html] <mxnet-neuron-model-serving>`
* Getting started with Gluon tutorial :ref:`[html] </src/examples/mxnet/mxnet-gluon-tutorial.ipynb>` :github:`[notebook] </src/examples/mxnet/mxnet-gluon-tutorial.ipynb>`
```
|
2023-09-29T20:54:54.437Z
|
|
MXNet 1.8: Getting Started with Gluon Tutorial — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/src/examples/mxnet/mxnet-gluon-tutorial.html
|
# Getting Started with Gluon Tutorial
## MXNet 1.8: Getting Started with Gluon Tutorial
In this tutorial you will compile and deploy ResNet-50 using the newly supported MXNet 1.8 and Gluon API on an Inf1 instance. This tutorial is only supported with MXNet 1.8.
This Jupyter notebook should be run on an inf1.6xlarge instance since you will be loading and compiling several large models.
To run this tutorial, please make sure you deactivate any existing MXNet conda environments you are already using. Install MXNet 1.8 by following the instructions at [MXNet Setup Guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-intro/mxnet-setup/mxnet-install.html#install-neuron-mxnet). You will also need to change your kernel to use the correct Python environment set up earlier by clicking Kernel->Change Kernel->Python (Neuron MXNet)
## Compile
A trained model must be compiled to an Inferentia target before it can run on Inferentia. In this step we compile a pre-trained ResNet50 and export it as a compiled MXNet checkpoint.
Compilation will take a few minutes. At the end of compilation, the files resnet-50\_compiled-0000.params and resnet-50\_compiled-symbol.json will be created in the local directory.
To check the supported operations for the uncompiled model or information on Neuron subgraphs for the compiled model, please see [Neuron Check Model](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-tools/tutorial-neuron-check-model.html#neuron-check-model).
```
import os
import mxnet as mx
import mx_neuron as neuron
import numpy as np
path='http://data.mxnet.io/models/imagenet/'
mx.test_utils.download(path+'resnet/50-layers/resnet-50-0000.params')
mx.test_utils.download(path+'resnet/50-layers/resnet-50-symbol.json')
block = mx.gluon.nn.SymbolBlock.imports('resnet-50-symbol.json',\
['data', 'softmax_label'], 'resnet-50-0000.params', ctx=mx.cpu())
block.hybridize()
# Compile for Inferentia using Neuron
inputs = { "data" : mx.nd.ones([1,3,224,224], name='data', dtype='float32'), 'softmax_label' : mx.nd.ones([1], name='data', dtype='float32') }
block = neuron.compile(block, inputs=inputs)
#save compiled model
block.export("resnet-50_compiled", 0, block)
```
## Deploy
Deploy on Inferentia to see inference results such as the ones below:
```
probability=0.643591, class=n02123045 tabby, tabby cat
probability=0.184392, class=n02123159 tiger cat
probability=0.105063, class=n02124075 Egyptian cat
probability=0.030101, class=n02127052 lynx, catamount
probability=0.016112, class=n02129604 tiger, Panthera tigris
```
```
import numpy as np
import mxnet as mx
import mx_neuron as neuron
path='http://data.mxnet.io/models/imagenet/'
mx.test_utils.download(path+'synset.txt')
fname = mx.test_utils.download('https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg?raw=true')
img = mx.image.imread(fname) # convert into format (batch, RGB, width, height)
img = mx.image.imresize(img, 224, 224) # resize
img = img.transpose((2, 0, 1)) # Channel first
img = img.expand_dims(axis=0) # batchify
img = img.astype(dtype='float32')
block = mx.gluon.nn.SymbolBlock.imports('resnet-50_compiled-symbol.json',\
['data', 'softmax_label'], 'resnet-50_compiled-0000.params', ctx=mx.cpu())
softmax = mx.nd.random_normal(shape=(1,))
out = block(img, softmax).asnumpy()
with open('synset.txt', 'r') as f:
labels = [l.rstrip() for l in f]
out = block(img, softmax).asnumpy()
prob = np.squeeze(out)
a = np.argsort(prob)[::-1]
for i in a[0:5]:
print('probability=%f, class=%s' %(prob[i], labels[i]))
```
}
mjx-container [size="Tn"] {
font-size: 60%;
}
mjx-container [size="sm"] {
font-size: 85%;
}
mjx-container [size="lg"] {
font-size: 120%;
}
mjx-container [size="Lg"] {
font-size: 144%;
}
mjx-container [size="LG"] {
font-size: 173%;
}
mjx-container [size="hg"] {
font-size: 207%;
}
mjx-container [size="HG"] {
font-size: 249%;
}
mjx-container [width="full"] {
width: 100%;
}
mjx-box {
display: inline-block;
}
mjx-block {
display: block;
}
mjx-itable {
display: inline-table;
}
mjx-row {
display: table-row;
}
mjx-row > * {
display: table-cell;
}
mjx-mtext {
display: inline-block;
}
mjx-mstyle {
display: inline-block;
}
mjx-merror {
display: inline-block;
color: red;
background-color: yellow;
}
mjx-mphantom {
visibility: hidden;
}
_::-webkit-full-page-media, _:future, :root mjx-container {
will-change: opacity;
}
mjx-assistive-mml {
position: absolute !important;
top: 0px;
left: 0px;
clip: rect(1px, 1px, 1px, 1px);
padding: 1px 0px 0px 0px !important;
border: 0px !important;
display: block !important;
width: auto !important;
overflow: hidden !important;
-webkit-touch-callout: none;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
mjx-assistive-mml[display="block"] {
width: 100% !important;
}
mjx-c::before {
display: block;
width: 0;
}
.MJX-TEX {
font-family: MJXZERO, MJXTEX;
}
.TEX-B {
font-family: MJXZERO, MJXTEX-B;
}
.TEX-I {
font-family: MJXZERO, MJXTEX-I;
}
.TEX-MI {
font-family: MJXZERO, MJXTEX-MI;
}
.TEX-BI {
font-family: MJXZERO, MJXTEX-BI;
}
.TEX-S1 {
font-family: MJXZERO, MJXTEX-S1;
}
.TEX-S2 {
font-family: MJXZERO, MJXTEX-S2;
}
.TEX-S3 {
font-family: MJXZERO, MJXTEX-S3;
}
.TEX-S4 {
font-family: MJXZERO, MJXTEX-S4;
}
.TEX-A {
font-family: MJXZERO, MJXTEX-A;
}
.TEX-C {
font-family: MJXZERO, MJXTEX-C;
}
.TEX-CB {
font-family: MJXZERO, MJXTEX-CB;
}
.TEX-FR {
font-family: MJXZERO, MJXTEX-FR;
}
.TEX-FRB {
font-family: MJXZERO, MJXTEX-FRB;
}
.TEX-SS {
font-family: MJXZERO, MJXTEX-SS;
}
.TEX-SSB {
font-family: MJXZERO, MJXTEX-SSB;
}
.TEX-SSI {
font-family: MJXZERO, MJXTEX-SSI;
}
.TEX-SC {
font-family: MJXZERO, MJXTEX-SC;
}
.TEX-T {
font-family: MJXZERO, MJXTEX-T;
}
.TEX-V {
font-family: MJXZERO, MJXTEX-V;
}
.TEX-VB {
font-family: MJXZERO, MJXTEX-VB;
}
mjx-stretchy-v mjx-c, mjx-stretchy-h mjx-c {
font-family: MJXZERO, MJXTEX-S1, MJXTEX-S4, MJXTEX, MJXTEX-A ! important;
}
@font-face /* 0 */ {
font-family: MJXZERO;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Zero.woff") format("woff");
}
@font-face /* 1 */ {
font-family: MJXTEX;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Main-Regular.woff") format("woff");
}
@font-face /* 2 */ {
font-family: MJXTEX-B;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Main-Bold.woff") format("woff");
}
@font-face /* 3 */ {
font-family: MJXTEX-I;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Math-Italic.woff") format("woff");
}
@font-face /* 4 */ {
font-family: MJXTEX-MI;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Main-Italic.woff") format("woff");
}
@font-face /* 5 */ {
font-family: MJXTEX-BI;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Math-BoldItalic.woff") format("woff");
}
@font-face /* 6 */ {
font-family: MJXTEX-S1;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size1-Regular.woff") format("woff");
}
@font-face /* 7 */ {
font-family: MJXTEX-S2;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size2-Regular.woff") format("woff");
}
@font-face /* 8 */ {
font-family: MJXTEX-S3;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size3-Regular.woff") format("woff");
}
@font-face /* 9 */ {
font-family: MJXTEX-S4;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Size4-Regular.woff") format("woff");
}
@font-face /* 10 */ {
font-family: MJXTEX-A;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_AMS-Regular.woff") format("woff");
}
@font-face /* 11 */ {
font-family: MJXTEX-C;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Calligraphic-Regular.woff") format("woff");
}
@font-face /* 12 */ {
font-family: MJXTEX-CB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Calligraphic-Bold.woff") format("woff");
}
@font-face /* 13 */ {
font-family: MJXTEX-FR;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Fraktur-Regular.woff") format("woff");
}
@font-face /* 14 */ {
font-family: MJXTEX-FRB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Fraktur-Bold.woff") format("woff");
}
@font-face /* 15 */ {
font-family: MJXTEX-SS;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_SansSerif-Regular.woff") format("woff");
}
@font-face /* 16 */ {
font-family: MJXTEX-SSB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_SansSerif-Bold.woff") format("woff");
}
@font-face /* 17 */ {
font-family: MJXTEX-SSI;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_SansSerif-Italic.woff") format("woff");
}
@font-face /* 18 */ {
font-family: MJXTEX-SC;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Script-Regular.woff") format("woff");
}
@font-face /* 19 */ {
font-family: MJXTEX-T;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Typewriter-Regular.woff") format("woff");
}
@font-face /* 20 */ {
font-family: MJXTEX-V;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Vector-Regular.woff") format("woff");
}
@font-face /* 21 */ {
font-family: MJXTEX-VB;
src: url("https://cdn.jsdelivr.net/npm/mathjax@3/es5/output/chtml/fonts/woff-v2/MathJax_Vector-Bold.woff") format("woff");
}
</style><meta http-equiv="origin-trial" content="AymqwRC7u88Y4JPvfIF2F37QKylC04248hLCdJAsh8xgOfe/dVJPV3XS3wLFca1ZMVOtnBfVjaCMTVudWM//5g4AAAB7eyJvcmlnaW4iOiJodHRwczovL3d3dy5nb29nbGV0YWdtYW5hZ2VyLmNvbTo0NDMiLCJmZWF0dXJlIjoiUHJpdmFjeVNhbmRib3hBZHNBUElzIiwiZXhwaXJ5IjoxNjk1MTY3OTk5LCJpc1RoaXJkUGFydHkiOnRydWV9"></head>
<body data-spy="scroll" data-target="#bd-toc-nav" data-offset="60">
<!-- Checkboxes to toggle the left sidebar -->
<input type="checkbox" class="sidebar-toggle" name="__navigation" id="__navigation" aria-label="Toggle navigation sidebar">
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/quick-start/github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fsrc/examples/mxnet/mxnet-gluon-tutorial.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/src/examples/mxnet/mxnet-gluon-tutorial.ipynb" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../_sources/src/examples/mxnet/mxnet-gluon-tutorial.ipynb.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.ipynb</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h1 nav-item toc-entry">
<a class="reference internal nav-link" href="#">
MXNet 1.8: Getting Started with Gluon Tutorial
</a>
</li>
<li class="toc-h1 nav-item toc-entry">
<a class="reference internal nav-link" href="#Compile">
Compile
</a>
</li>
<li class="toc-h1 nav-item toc-entry">
<a class="reference internal nav-link" href="#Deploy">
Deploy
</a>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>MXNet 1.8: Getting Started with Gluon Tutorial</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
<div>
<h2> Contents </h2>
</div>
<nav aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h1 nav-item toc-entry">
<a class="reference internal nav-link" href="#">
MXNet 1.8: Getting Started with Gluon Tutorial
</a>
</li>
<li class="toc-h1 nav-item toc-entry">
<a class="reference internal nav-link" href="#Compile">
Compile
</a>
</li>
<li class="toc-h1 nav-item toc-entry">
<a class="reference internal nav-link" href="#Deploy">
Deploy
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
<style>
/* CSS for nbsphinx extension */
/* remove conflicting styling from Sphinx themes */
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt *,
div.nbinput.container div.input_area pre,
div.nboutput.container div.output_area pre,
div.nbinput.container div.input_area .highlight,
div.nboutput.container div.output_area .highlight {
border: none;
padding: 0;
margin: 0;
box-shadow: none;
}
div.nbinput.container > div[class*=highlight],
div.nboutput.container > div[class*=highlight] {
margin: 0;
}
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt * {
background: none;
}
div.nboutput.container div.output_area .highlight,
div.nboutput.container div.output_area pre {
background: unset;
}
div.nboutput.container div.output_area div.highlight {
color: unset; /* override Pygments text color */
}
/* avoid gaps between output lines */
div.nboutput.container div[class*=highlight] pre {
line-height: normal;
}
/* input/output containers */
div.nbinput.container,
div.nboutput.container {
display: -webkit-flex;
display: flex;
align-items: flex-start;
margin: 0;
width: 100%;
}
@media (max-width: 540px) {
div.nbinput.container,
div.nboutput.container {
flex-direction: column;
}
}
/* input container */
div.nbinput.container {
padding-top: 5px;
}
/* last container */
div.nblast.container {
padding-bottom: 5px;
}
/* input prompt */
div.nbinput.container div.prompt pre {
color: #307FC1;
}
/* output prompt */
div.nboutput.container div.prompt pre {
color: #BF5B3D;
}
/* all prompts */
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: 4.5ex;
padding-top: 5px;
position: relative;
user-select: none;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: absolute;
right: 0;
margin-right: 0.3ex;
}
@media (max-width: 540px) {
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: unset;
text-align: left;
padding: 0.4em;
}
div.nboutput.container div.prompt.empty {
padding: 0;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: unset;
}
}
/* disable scrollbars on prompts */
div.nbinput.container div.prompt pre,
div.nboutput.container div.prompt pre {
overflow: hidden;
}
/* input/output area */
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
-webkit-flex: 1;
flex: 1;
overflow: auto;
}
@media (max-width: 540px) {
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
width: 100%;
}
}
/* input area */
div.nbinput.container div.input_area {
border: 1px solid #e0e0e0;
border-radius: 2px;
/*background: #f5f5f5;*/
}
/* override MathJax center alignment in output cells */
div.nboutput.container div[class*=MathJax] {
text-align: left !important;
}
/* override sphinx.ext.imgmath center alignment in output cells */
div.nboutput.container div.math p {
text-align: left;
}
/* standard error */
div.nboutput.container div.output_area.stderr {
background: #fdd;
}
/* ANSI colors */
.ansi-black-fg { color: #3E424D; }
.ansi-black-bg { background-color: #3E424D; }
.ansi-black-intense-fg { color: #282C36; }
.ansi-black-intense-bg { background-color: #282C36; }
.ansi-red-fg { color: #E75C58; }
.ansi-red-bg { background-color: #E75C58; }
.ansi-red-intense-fg { color: #B22B31; }
.ansi-red-intense-bg { background-color: #B22B31; }
.ansi-green-fg { color: #00A250; }
.ansi-green-bg { background-color: #00A250; }
.ansi-green-intense-fg { color: #007427; }
.ansi-green-intense-bg { background-color: #007427; }
.ansi-yellow-fg { color: #DDB62B; }
.ansi-yellow-bg { background-color: #DDB62B; }
.ansi-yellow-intense-fg { color: #B27D12; }
.ansi-yellow-intense-bg { background-color: #B27D12; }
.ansi-blue-fg { color: #208FFB; }
.ansi-blue-bg { background-color: #208FFB; }
.ansi-blue-intense-fg { color: #0065CA; }
.ansi-blue-intense-bg { background-color: #0065CA; }
.ansi-magenta-fg { color: #D160C4; }
.ansi-magenta-bg { background-color: #D160C4; }
.ansi-magenta-intense-fg { color: #A03196; }
.ansi-magenta-intense-bg { background-color: #A03196; }
.ansi-cyan-fg { color: #60C6C8; }
.ansi-cyan-bg { background-color: #60C6C8; }
.ansi-cyan-intense-fg { color: #258F8F; }
.ansi-cyan-intense-bg { background-color: #258F8F; }
.ansi-white-fg { color: #C5C1B4; }
.ansi-white-bg { background-color: #C5C1B4; }
.ansi-white-intense-fg { color: #A1A6B2; }
.ansi-white-intense-bg { background-color: #A1A6B2; }
.ansi-default-inverse-fg { color: #FFFFFF; }
.ansi-default-inverse-bg { background-color: #000000; }
.ansi-bold { font-weight: bold; }
.ansi-underline { text-decoration: underline; }
div.nbinput.container div.input_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight].math,
div.nboutput.container div.output_area.rendered_html,
div.nboutput.container div.output_area > div.output_javascript,
div.nboutput.container div.output_area:not(.rendered_html) > img{
padding: 5px;
margin: 0;
}
/* fix copybtn overflow problem in chromium (needed for 'sphinx_copybutton') */
div.nbinput.container div.input_area > div[class^='highlight'],
div.nboutput.container div.output_area > div[class^='highlight']{
overflow-y: hidden;
}
/* hide copybtn icon on prompts (needed for 'sphinx_copybutton') */
.prompt .copybtn {
display: none;
}
/* Some additional styling taken form the Jupyter notebook CSS */
.jp-RenderedHTMLCommon table,
div.rendered_html table {
border: none;
border-collapse: collapse;
border-spacing: 0;
color: black;
font-size: 12px;
table-layout: fixed;
}
.jp-RenderedHTMLCommon thead,
div.rendered_html thead {
border-bottom: 1px solid black;
vertical-align: bottom;
}
.jp-RenderedHTMLCommon tr,
.jp-RenderedHTMLCommon th,
.jp-RenderedHTMLCommon td,
div.rendered_html tr,
div.rendered_html th,
div.rendered_html td {
text-align: right;
vertical-align: middle;
padding: 0.5em 0.5em;
line-height: normal;
white-space: normal;
max-width: none;
border: none;
}
.jp-RenderedHTMLCommon th,
div.rendered_html th {
font-weight: bold;
}
.jp-RenderedHTMLCommon tbody tr:nth-child(odd),
div.rendered_html tbody tr:nth-child(odd) {
background: #f5f5f5;
}
.jp-RenderedHTMLCommon tbody tr:hover,
div.rendered_html tbody tr:hover {
background: rgba(66, 165, 245, 0.2);
}
</style>
<div class="section" id="MXNet-1.8:-Getting-Started-with-Gluon-Tutorial">
<h1>MXNet 1.8: Getting Started with Gluon Tutorial<a class="headerlink" href="#MXNet-1.8:-Getting-Started-with-Gluon-Tutorial" title="Permalink to this headline">#</a></h1>
<p>In this tutorial you will compile and deploy resnet-50 using the newly supported MXNet 1.8 and Gluon API on an Inf1 instance. This tutorial is only supported with MXNet 1.8.</p>
<p>This Jupyter notebook should be run on an inf1.6xlarge instance since you will be loading and compiling several large models.</p>
<p>To run this tutorial, please make sure you deactivate any existing MXNet conda environments you already using. Install MXNet 1.8 by following the instructions at <a class="reference external" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-intro/mxnet-setup/mxnet-install.html#install-neuron-mxnet">MXNet Setup Guide</a>. You would also need to change your kernel to use the correct Python environment setup earlier by clicking Kerenel->Change Kernel->Python (Neuron MXNet)</p>
</div>
<div class="section" id="Compile">
<h1>Compile<a class="headerlink" href="#Compile" title="Permalink to this headline">#</a></h1>
<p>A trained model must be compiled to Inferentia target before it can run on Inferentia. In this step we compile a pre-trained ResNet50 and export it as a compiled MXNet checkpoint.</p>
<p>Compilation will take a few minutes. At the end of compilation, the files resnet-50_compiled-0000.params and resnet-50_compiled-symbol.json will be created in local directory.</p>
<p>To check the supported operations for the uncompiled model or information on Neuron subgraphs for the compiled model, please see <a class="reference external" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-tools/tutorial-neuron-check-model.html#neuron-check-model">Neuron Check Model</a>.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">os</span>
<span class="kn">import</span> <span class="nn">mxnet</span> <span class="k">as</span> <span class="nn">mx</span>
<span class="kn">import</span> <span class="nn">mx_neuron</span> <span class="k">as</span> <span class="nn">neuron</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="n">path</span><span class="o">=</span><span class="s1">'http://data.mxnet.io/models/imagenet/'</span>
<span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="n">path</span><span class="o">+</span><span class="s1">'resnet/50-layers/resnet-50-0000.params'</span><span class="p">)</span>
<span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="n">path</span><span class="o">+</span><span class="s1">'resnet/50-layers/resnet-50-symbol.json'</span><span class="p">)</span>
<span class="n">block</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">gluon</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">SymbolBlock</span><span class="o">.</span><span class="n">imports</span><span class="p">(</span><span class="s1">'resnet-50-symbol.json'</span><span class="p">,</span>\
<span class="p">[</span><span class="s1">'data'</span><span class="p">,</span> <span class="s1">'softmax_label'</span><span class="p">],</span> <span class="s1">'resnet-50-0000.params'</span><span class="p">,</span> <span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">())</span>
<span class="n">block</span><span class="o">.</span><span class="n">hybridize</span><span class="p">()</span>
<span class="c1"># Compile for Inferentia using Neuron</span>
<span class="n">inputs</span> <span class="o">=</span> <span class="p">{</span> <span class="s2">"data"</span> <span class="p">:</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">,</span><span class="mi">224</span><span class="p">,</span><span class="mi">224</span><span class="p">],</span> <span class="n">name</span><span class="o">=</span><span class="s1">'data'</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">'float32'</span><span class="p">),</span> <span class="s1">'softmax_label'</span> <span class="p">:</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">([</span><span class="mi">1</span><span class="p">],</span> <span class="n">name</span><span class="o">=</span><span class="s1">'data'</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">'float32'</span><span class="p">)</span> <span class="p">}</span>
<span class="n">block</span> <span class="o">=</span> <span class="n">neuron</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span><span class="n">block</span><span class="p">,</span> <span class="n">inputs</span><span class="o">=</span><span class="n">inputs</span><span class="p">)</span>
<span class="c1">#save compiled model</span>
<span class="n">block</span><span class="o">.</span><span class="n">export</span><span class="p">(</span><span class="s2">"resnet-50_compiled"</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="n">block</span><span class="p">)</span>
</pre></div>
</div>
</div>
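Because the shapes in the `inputs` dictionary are fixed at compile time, a different batch size requires a separate compilation. As a minimal sketch (not part of the original notebook; the batch-4 shapes, input name for `softmax_label`, and output prefix are assumptions), compiling the same network for batch size 4 might look like this:

```python
# Sketch only: shapes and names below are assumptions, not from the tutorial.
# Re-import the uncompiled network, since the cell above reassigned `block`
# to the compiled model.
block_fp32 = mx.gluon.nn.SymbolBlock.imports('resnet-50-symbol.json',
    ['data', 'softmax_label'], 'resnet-50-0000.params', ctx=mx.cpu())
block_fp32.hybridize()

inputs_b4 = {"data": mx.nd.ones([4, 3, 224, 224], name='data', dtype='float32'),
             'softmax_label': mx.nd.ones([4], name='softmax_label', dtype='float32')}
block_b4 = neuron.compile(block_fp32, inputs=inputs_b4)  # same API call as above
block_b4.export("resnet-50_compiled_b4", 0, block_b4)
```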
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">!</span>ls
</pre></div>
</div>
</div>
</div>
<div class="section" id="Deploy">
<h1>Deploy<a class="headerlink" href="#Deploy" title="Permalink to this headline">#</a></h1>
<p>Deply on Infenrentia to see the inference results as below:</p>
<div class="highlight-none notranslate"><div class="highlight"><pre><span></span>probability=0.643591, class=n02123045 tabby, tabby cat
probability=0.184392, class=n02123159 tiger cat
probability=0.105063, class=n02124075 Egyptian cat
probability=0.030101, class=n02127052 lynx, catamount
probability=0.016112, class=n02129604 tiger, Panthera tigris
</pre></div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">import</span> <span class="nn">mxnet</span> <span class="k">as</span> <span class="nn">mx</span>
<span class="kn">import</span> <span class="nn">mx_neuron</span> <span class="k">as</span> <span class="nn">neuron</span>
<span class="n">path</span><span class="o">=</span><span class="s1">'http://data.mxnet.io/models/imagenet/'</span>
<span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="n">path</span><span class="o">+</span><span class="s1">'synset.txt'</span><span class="p">)</span>
<span class="n">fname</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="s1">'https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg?raw=true'</span><span class="p">)</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">image</span><span class="o">.</span><span class="n">imread</span><span class="p">(</span><span class="n">fname</span><span class="p">)</span><span class="c1"># convert into format (batch, RGB, width, height)</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">image</span><span class="o">.</span><span class="n">imresize</span><span class="p">(</span><span class="n">img</span><span class="p">,</span> <span class="mi">224</span><span class="p">,</span> <span class="mi">224</span><span class="p">)</span> <span class="c1"># resize</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">img</span><span class="o">.</span><span class="n">transpose</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">))</span> <span class="c1"># Channel first</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">img</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span> <span class="c1"># batchify</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">img</span><span class="o">.</span><span class="n">astype</span><span class="p">(</span><span class="n">dtype</span><span class="o">=</span><span class="s1">'float32'</span><span class="p">)</span>
<span class="n">block</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">gluon</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">SymbolBlock</span><span class="o">.</span><span class="n">imports</span><span class="p">(</span><span class="s1">'resnet-50_compiled-symbol.json'</span><span class="p">,</span>\
<span class="p">[</span><span class="s1">'data'</span><span class="p">,</span> <span class="s1">'softmax_label'</span><span class="p">],</span> <span class="s1">'resnet-50_compiled-0000.params'</span><span class="p">,</span> <span class="n">ctx</span><span class="o">=</span><span class="n">mx</span><span class="o">.</span><span class="n">cpu</span><span class="p">())</span>
<span class="n">softmax</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">random_normal</span><span class="p">(</span><span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,))</span>
<span class="n">out</span> <span class="o">=</span> <span class="n">block</span><span class="p">(</span><span class="n">img</span><span class="p">,</span> <span class="n">softmax</span><span class="p">)</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s1">'synset.txt'</span><span class="p">,</span> <span class="s1">'r'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">labels</span> <span class="o">=</span> <span class="p">[</span><span class="n">l</span><span class="o">.</span><span class="n">rstrip</span><span class="p">()</span> <span class="k">for</span> <span class="n">l</span> <span class="ow">in</span> <span class="n">f</span><span class="p">]</span>
<span class="n">out</span> <span class="o">=</span> <span class="n">block</span><span class="p">(</span><span class="n">img</span><span class="p">,</span> <span class="n">softmax</span><span class="p">)</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">squeeze</span><span class="p">(</span><span class="n">out</span><span class="p">)</span>
<span class="n">a</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">argsort</span><span class="p">(</span><span class="n">prob</span><span class="p">)[::</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="n">a</span><span class="p">[</span><span class="mi">0</span><span class="p">:</span><span class="mi">5</span><span class="p">]:</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'probability=</span><span class="si">%f</span><span class="s1">, class=</span><span class="si">%s</span><span class="s1">'</span> <span class="o">%</span><span class="p">(</span><span class="n">prob</span><span class="p">[</span><span class="n">i</span><span class="p">],</span> <span class="n">labels</span><span class="p">[</span><span class="n">i</span><span class="p">]))</span>
</pre></div>
|
2023-09-29T20:54:54.545Z
|
Tutorial: Neuron Apache MXNet (Incubating) Model Serving — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/mxnet-neuron/tutorials/tutorial-model-serving.html#mxnet-neuron-model-serving
|
# Neuron Apache MXNet (Incubating) Model Serving — AWS Neuron Documentation
_This document is relevant for_: `Inf1`
## Tutorial: Neuron Apache MXNet (Incubating) Model Serving
This MXNet Neuron Multi Model Server (MMS) example is adapted from the MXNet vision service example, which uses a pretrained SqueezeNet model to perform image classification: [https://github.com/awslabs/multi-model-server/tree/master/examples/mxnet\_vision](https://github.com/awslabs/multi-model-server/tree/master/examples/mxnet_vision).
Before starting this example, please ensure that the Neuron-optimized MXNet package (`mxnet-neuron`) is installed along with the Neuron compiler.
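A quick way to confirm the environment before you begin (an illustrative check, not part of the original tutorial):
```
# Illustrative sanity check: confirm the Neuron-optimized MXNet stack imports.
# Assumes mxnet-neuron with MXNet >= 1.8; on MXNet 1.5 the plugin lives at
# mxnet.contrib.neuron instead of the mx_neuron package.
import mxnet as mx
print("MXNet version:", mx.__version__)

import mx_neuron  # raises ImportError if the Neuron plugin is missing
print("mx_neuron imported OK")
```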
## Warning
If you are using MXNet-1.5, please note that MXNet-1.5 has entered maintenance mode and requires Neuron Runtime 1.x; see [10/27/2021 - Neuron support for Apache MXNet 1.5 enters maintenance mode](../../../general/announcements/neuron1.x/announcements.html#maintenance-mxnet-1-5). To set up a development environment for MXNet-1.5, see the installation instructions at [MXNet Neuron Setup](../mxnet-neuron-setup.html#mxnet-setup).
If you are using the DLAMI, you can activate the `aws_neuron_mxnet_p36` environment and skip the installation portion of the first step below.
1. First, install the Java runtime and multi-model-server:
```
cd ~/
# sudo yum -y install -q jre # for AML2
sudo apt-get install -y -q default-jre # for Ubuntu
pip install multi-model-server
```
Download the example code:
```
git clone https://github.com/awslabs/multi-model-server
cd ~/multi-model-server/examples/mxnet_vision
```
2. Compile the ResNet-50 model for the Inferentia target by saving the following Python script as `compile_resnet50.py` and running `python compile_resnet50.py`:
```
from packaging import version
import numpy as np
import mxnet as mx

mxnet_version = version.parse(mx.__version__)
if mxnet_version >= version.parse("1.8"):
    import mx_neuron as neuron
else:
    from mxnet.contrib import neuron

path = 'http://data.mxnet.io/models/imagenet/'
mx.test_utils.download(path + 'resnet/50-layers/resnet-50-0000.params')
mx.test_utils.download(path + 'resnet/50-layers/resnet-50-symbol.json')
mx.test_utils.download(path + 'synset.txt')

nn_name = "resnet-50"

# Load a model
sym, args, auxs = mx.model.load_checkpoint(nn_name, 0)

# Define compilation parameters - input shape and dtype
inputs = {'data': mx.nd.zeros([1, 3, 224, 224], dtype='float32')}

# Compile graph to Inferentia target
csym, cargs, cauxs = neuron.compile(sym, args, auxs, inputs)

# Save compiled model
mx.model.save_checkpoint(nn_name + "_compiled", 0, csym, cargs, cauxs)
```
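After the script completes, the compiled symbol and parameter files should appear alongside the originals; a quick existence check (illustrative only):
```
# Illustrative: confirm compilation produced the expected artifacts.
import os

for fname in ("resnet-50_compiled-symbol.json", "resnet-50_compiled-0000.params"):
    assert os.path.exists(fname), "missing compiled artifact: " + fname
print("compiled model artifacts found")
```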
3. Prepare signature file `signature.json` to configure the input name and shape:
```
{
"inputs": [
{
"data_name": "data",
"data_shape": [
1,
3,
224,
224
]
}
]
}
```
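The `data_shape` above must match the input shape passed to `neuron.compile()` in step 2; a mismatch typically surfaces only at inference time. A small consistency check (a sketch, not part of the original tutorial):
```
# Sketch: verify signature.json matches the shape the model was compiled with.
import json

COMPILED_SHAPE = [1, 3, 224, 224]  # shape used in compile_resnet50.py

with open("signature.json") as f:
    sig = json.load(f)

entry = sig["inputs"][0]
assert entry["data_name"] == "data"
assert entry["data_shape"] == COMPILED_SHAPE, entry["data_shape"]
print("signature matches compiled input shape")
```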
4. Prepare `synset.txt`, which lists the names of the ImageNet prediction classes:
```
curl -O https://s3.amazonaws.com/model-server/model_archive_1.0/examples/squeezenet_v1.1/synset.txt
```
5. Create the custom service class following the template in the `model_service_template` folder:
```
cp -r ../model_service_template/* .
```
Edit `mxnet_model_service.py` to use the appropriate context.
Make the following change:
```
from packaging import version
mxnet_version = version.parse(mx.__version__)
if mxnet_version >= version.parse("1.8"):
    import mx_neuron as neuron

self.mxnet_ctx = mx.neuron()
```
Comment out the existing context assignment:
```
#self.mxnet_ctx = mx.cpu() if gpu_id is None else mx.gpu(gpu_id)
```
Also, comment out the unnecessary data copy for `model_input` in `mxnet_model_service.py`:
```
#model_input = [item.as_in_context(self.mxnet_ctx) for item in model_input]
```
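Before relying on these edits, you can confirm that the Neuron context is available in your environment (an illustrative check; assumes MXNet >= 1.8 with the `mx_neuron` plugin installed):
```
# Illustrative: importing mx_neuron registers the Neuron context with MXNet.
import mxnet as mx
import mx_neuron  # noqa: F401

ctx = mx.neuron()
print("Neuron context:", ctx)
```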
6. Package the model with `model-archiver`:
```
cd ~/multi-model-server/examples
model-archiver --force --model-name resnet-50_compiled --model-path mxnet_vision --handler mxnet_vision_service:handle
```
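The archiver writes `resnet-50_compiled.mar` into the current directory, which is also the `--model-store` used in the next step. A quick check (illustrative):
```
# Illustrative: confirm the model archive was produced where MMS will look for it.
import os

mar_path = os.path.expanduser("~/multi-model-server/examples/resnet-50_compiled.mar")
assert os.path.exists(mar_path), "archive not found: " + mar_path
print("archive size:", os.path.getsize(mar_path), "bytes")
```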
7. Start MXNet Model Server (MMS) and load the model using the RESTful API. Please ensure that Neuron RTD is running with default settings (see rtd-getting-started):
```
cd ~/multi-model-server/
multi-model-server --start --model-store examples
# Pipe to log file if you want to keep a log of MMS
curl -v -X POST "http://localhost:8081/models?initial_workers=1&max_workers=1&synchronous=true&url=resnet-50_compiled.mar"
sleep 10 # allow sufficient time to load model
```
Each worker requires a NeuronCore group that can accommodate the compiled model. Additional workers can be added by increasing the `max_workers` configuration, as long as there are enough NeuronCores available. Use `neuron-top` to see which models are loaded on specific NeuronCores.
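Workers can also be scaled after the model is loaded, through the same management API used above (a sketch using Python `requests`; assumes the default management port 8081):
```
# Sketch: scale the worker pool for the loaded model via the MMS management API.
import requests

resp = requests.put(
    "http://localhost:8081/models/resnet-50_compiled",
    params={"min_worker": 1, "max_worker": 1, "synchronous": "true"},
)
resp.raise_for_status()
print(resp.text)
```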
8. Test inference using an example image:
```
curl -O https://raw.githubusercontent.com/awslabs/multi-model-server/master/docs/images/kitten_small.jpg
curl -X POST http://127.0.0.1:8080/predictions/resnet-50_compiled -T kitten_small.jpg
```
You should see output similar to the following:
```
[
{
"probability": 0.6375716328620911,
"class": "n02123045 tabby, tabby cat"
},
{
"probability": 0.1692783385515213,
"class": "n02123159 tiger cat"
},
{
"probability": 0.12187337130308151,
"class": "n02124075 Egyptian cat"
},
{
"probability": 0.028840631246566772,
"class": "n02127052 lynx, catamount"
},
{
"probability": 0.019691042602062225,
"class": "n02129604 tiger, Panthera tigris"
}
]
```
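The same request can be issued from Python, which is convenient for repeated testing (a sketch using `requests`; assumes the server and model from step 7 are still running):
```
# Sketch: query the deployed model from Python instead of curl.
import requests

with open("kitten_small.jpg", "rb") as f:
    image_bytes = f.read()

resp = requests.post(
    "http://127.0.0.1:8080/predictions/resnet-50_compiled",
    data=image_bytes,
)
resp.raise_for_status()
for entry in resp.json():
    print("%.4f  %s" % (entry["probability"], entry["class"]))
```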
9. To clean up after the test, issue a delete command via the RESTful API and stop the model server:
```
curl -X DELETE http://127.0.0.1:8081/models/resnet-50_compiled
multi-model-server --stop
```
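To verify the teardown programmatically, the management endpoint should stop responding once the server exits (a sketch; assumes the default ports):
```
# Sketch: verify the model server is down after `multi-model-server --stop`.
import requests

try:
    requests.get("http://127.0.0.1:8081/models", timeout=2)
    print("management API still responding -- server may not have stopped")
except requests.exceptions.ConnectionError:
    print("server stopped")
```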
_This document is relevant for_: `Inf1`
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fframeworks/mxnet-neuron/tutorials/tutorial-model-serving.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/frameworks/mxnet-neuron/tutorials/tutorial-model-serving.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../_sources/frameworks/mxnet-neuron/tutorials/tutorial-model-serving.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.rst</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#warning">
Warning
</a>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>Tutorial: Neuron Apache MXNet (Incubating) Model Serving</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
<div>
<h2> Contents </h2>
</div>
<nav aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#warning">
Warning
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
<div class="section" id="tutorial-neuron-apache-mxnet-incubating-model-serving">
<span id="mxnet-neuron-model-serving"></span><h1>Tutorial: Neuron Apache MXNet (Incubating) Model Serving<a class="headerlink" href="#tutorial-neuron-apache-mxnet-incubating-model-serving" title="Permalink to this headline">#</a></h1>
<p>This MXNet Neuron model serving example uses the Multi Model Server (MMS) and is
adapted from the MXNet vision service example, which uses a pretrained
SqueezeNet to perform image classification:
<a class="reference external" href="https://github.com/awslabs/multi-model-server/tree/master/examples/mxnet_vision">https://github.com/awslabs/multi-model-server/tree/master/examples/mxnet_vision</a>.</p>
<p>Before starting this example, please ensure that the Neuron-optimized MXNet
package, mxnet-neuron, is installed along with the Neuron compiler.</p>
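<p>As a quick sanity check before continuing, you can confirm from Python that the Neuron-enabled MXNet is importable. This is a sketch based on the version logic used in the compile script below, not part of the original example:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre># Environment check (sketch): confirm the Neuron plugin for MXNet is available.
import mxnet as mx

print("MXNet version:", mx.__version__)
try:
    import mx_neuron                    # Neuron plugin for MXNet >= 1.8
    print("mx_neuron plugin found")
except ImportError:
    from mxnet.contrib import neuron    # MXNet 1.5 ships Neuron support in contrib
    print("mxnet.contrib.neuron found")
</pre></div></div>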
<div class="section" id="warning">
<h2>Warning<a class="headerlink" href="#warning" title="Permalink to this headline">#</a></h2>
<p>If you are using MXNet-1.5, please note that MXNet-1.5 has entered maintenance mode and requires Neuron Runtime 1.x; see <a class="reference internal" href="../../../general/announcements/neuron1.x/announcements.html#maintenance-mxnet-1-5"><span class="std std-ref">10/27/2021 - Neuron support for Apache MXNet 1.5 enters maintenance mode</span></a>.
To set up a development environment for MXNet-1.5, see the installation instructions at <a class="reference internal" href="../mxnet-neuron-setup.html#mxnet-setup"><span class="std std-ref">MXNet Neuron Setup</span></a>.</p>
<p>If you are using the DLAMI, you can activate the aws_neuron_mxnet_p36
environment and skip the installation in the first step below.</p>
<ol class="arabic simple">
<li><p>First, install Java runtime and multi-model-server:</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="nb">cd</span><span class="w"> </span>~/
<span class="c1"># sudo yum -y install -q jre # for AML2</span>
sudo<span class="w"> </span>apt-get<span class="w"> </span>install<span class="w"> </span>-y<span class="w"> </span>-q<span class="w"> </span>default-jre<span class="w"> </span><span class="c1"># for Ubuntu</span>
pip<span class="w"> </span>install<span class="w"> </span>multi-model-server
</pre></div>
</div>
<p>Download the example code:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>git<span class="w"> </span>clone<span class="w"> </span>https://github.com/awslabs/multi-model-server
<span class="nb">cd</span><span class="w"> </span>~/multi-model-server/examples/mxnet_vision
</pre></div>
</div>
<ol class="arabic simple" start="2">
<li><p>Compile the ResNet-50 model for the Inferentia target by saving the following
Python script as compile_resnet50.py and running
“<code class="docutils literal notranslate"><span class="pre">python</span> <span class="pre">compile_resnet50.py</span></code>”:</p></li>
</ol>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">packaging</span> <span class="kn">import</span> <span class="n">version</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">import</span> <span class="nn">mxnet</span> <span class="k">as</span> <span class="nn">mx</span>
<span class="n">mxnet_version</span> <span class="o">=</span> <span class="n">version</span><span class="o">.</span><span class="n">parse</span><span class="p">(</span><span class="n">mx</span><span class="o">.</span><span class="n">__version__</span><span class="p">)</span>
<span class="k">if</span> <span class="n">mxnet_version</span> <span class="o">>=</span> <span class="n">version</span><span class="o">.</span><span class="n">parse</span><span class="p">(</span><span class="s2">"1.8"</span><span class="p">):</span>
<span class="kn">import</span> <span class="nn">mx_neuron</span> <span class="k">as</span> <span class="nn">neuron</span>
<span class="k">else</span><span class="p">:</span>
<span class="kn">from</span> <span class="nn">mxnet.contrib</span> <span class="kn">import</span> <span class="n">neuron</span>
<span class="n">path</span><span class="o">=</span><span class="s1">'http://data.mxnet.io/models/imagenet/'</span>
<span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="n">path</span><span class="o">+</span><span class="s1">'resnet/50-layers/resnet-50-0000.params'</span><span class="p">)</span>
<span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="n">path</span><span class="o">+</span><span class="s1">'resnet/50-layers/resnet-50-symbol.json'</span><span class="p">)</span>
<span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="n">path</span><span class="o">+</span><span class="s1">'synset.txt'</span><span class="p">)</span>
<span class="n">nn_name</span> <span class="o">=</span> <span class="s2">"resnet-50"</span>
<span class="c1">#Load a model</span>
<span class="n">sym</span><span class="p">,</span> <span class="n">args</span><span class="p">,</span> <span class="n">auxs</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">load_checkpoint</span><span class="p">(</span><span class="n">nn_name</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="c1">#Define compilation parameters</span>
<span class="c1"># - input shape and dtype</span>
<span class="n">inputs</span> <span class="o">=</span> <span class="p">{</span><span class="s1">'data'</span> <span class="p">:</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">zeros</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">,</span><span class="mi">224</span><span class="p">,</span><span class="mi">224</span><span class="p">],</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">'float32'</span><span class="p">)</span> <span class="p">}</span>
<span class="c1"># compile graph to inferentia target</span>
<span class="n">csym</span><span class="p">,</span> <span class="n">cargs</span><span class="p">,</span> <span class="n">cauxs</span> <span class="o">=</span> <span class="n">neuron</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span><span class="n">sym</span><span class="p">,</span> <span class="n">args</span><span class="p">,</span> <span class="n">auxs</span><span class="p">,</span> <span class="n">inputs</span><span class="p">)</span>
<span class="c1"># save compiled model</span>
<span class="n">mx</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">save_checkpoint</span><span class="p">(</span><span class="n">nn_name</span> <span class="o">+</span> <span class="s2">"_compiled"</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="n">csym</span><span class="p">,</span> <span class="n">cargs</span><span class="p">,</span> <span class="n">cauxs</span><span class="p">)</span>
</pre></div>
</div>
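<p>Optionally, before packaging the model, you can run a single inference against the compiled checkpoint to confirm that it loads and executes on Neuron. The following is a sketch, not part of the original tutorial; it assumes the mx.neuron() context is available:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre># Sanity-check inference on the compiled checkpoint (sketch).
import mxnet as mx
import mx_neuron as neuron     # MXNet >= 1.8: registers mx.neuron(); not needed on 1.5

sym, args, auxs = mx.model.load_checkpoint('resnet-50_compiled', 0)
args['data'] = mx.nd.zeros([1, 3, 224, 224])   # same input shape used at compile time
exe = sym.bind(ctx=mx.neuron(), args=args, aux_states=auxs, grad_req='null')
out = exe.forward()
print(out[0].shape)            # expect (1, 1000) ImageNet class scores
</pre></div></div>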
<ol class="arabic simple" start="3">
<li><p>Prepare signature file <code class="docutils literal notranslate"><span class="pre">signature.json</span></code> to configure the input name
and shape:</p></li>
</ol>
<div class="highlight-json notranslate"><div class="highlight"><pre><span></span><span class="p">{</span>
<span class="w"> </span><span class="nt">"inputs"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="p">{</span>
<span class="w"> </span><span class="nt">"data_name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"data"</span><span class="p">,</span>
<span class="w"> </span><span class="nt">"data_shape"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span>
<span class="w"> </span><span class="mi">1</span><span class="p">,</span>
<span class="w"> </span><span class="mi">3</span><span class="p">,</span>
<span class="w"> </span><span class="mi">224</span><span class="p">,</span>
<span class="w"> </span><span class="mi">224</span>
<span class="w"> </span><span class="p">]</span>
<span class="w"> </span><span class="p">}</span>
<span class="w"> </span><span class="p">]</span>
<span class="p">}</span>
</pre></div>
</div>
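<p>The data_shape here must match the input shape the model was compiled with ([1, 3, 224, 224] above). If you prefer, the file can be generated from Python; a minimal sketch:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre># Write signature.json programmatically (sketch); the shape must match compile time.
import json

signature = {"inputs": [{"data_name": "data", "data_shape": [1, 3, 224, 224]}]}
with open("signature.json", "w") as f:
    json.dump(signature, f, indent=2)
</pre></div></div>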
<ol class="arabic simple" start="4">
<li><p>Prepare <code class="docutils literal notranslate"><span class="pre">synset.txt</span></code> which is a list of names for ImageNet
prediction classes:</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>curl<span class="w"> </span>-O<span class="w"> </span>https://s3.amazonaws.com/model-server/model_archive_1.0/examples/squeezenet_v1.1/synset.txt
</pre></div>
</div>
<ol class="arabic simple" start="5">
<li><p>Create a custom service class following the template in the
model_service_template folder:</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>cp<span class="w"> </span>-r<span class="w"> </span>../model_service_template/*<span class="w"> </span>.
</pre></div>
</div>
<p>Edit <code class="docutils literal notranslate"><span class="pre">mxnet_model_service.py</span></code> to use the appropriate context.</p>
<p>Make the following change:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>from<span class="w"> </span>packaging<span class="w"> </span>import<span class="w"> </span>version
<span class="nv">mxnet_version</span><span class="w"> </span><span class="o">=</span><span class="w"> </span>version.parse<span class="o">(</span>mx.__version__<span class="o">)</span>
<span class="k">if</span><span class="w"> </span>mxnet_version<span class="w"> </span>><span class="o">=</span><span class="w"> </span>version.parse<span class="o">(</span><span class="s2">"1.8"</span><span class="o">)</span>:
<span class="w"> </span>import<span class="w"> </span>mx_neuron<span class="w"> </span>as<span class="w"> </span>neuron
self.mxnet_ctx<span class="w"> </span><span class="o">=</span><span class="w"> </span>mx.neuron<span class="o">()</span>
</pre></div>
</div>
<p>Comment out the existing context assignment:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="c1">#self.mxnet_ctx = mx.cpu() if gpu_id is None else mx.gpu(gpu_id)</span>
</pre></div>
</div>
<p>Also, comment out the unnecessary data copy of model_input in
<code class="docutils literal notranslate"><span class="pre">mxnet_model_service.py</span></code>:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="c1">#model_input = [item.as_in_context(self.mxnet_ctx) for item in model_input]</span>
</pre></div>
</div>
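<p>Taken together, the edited context selection in mxnet_model_service.py looks roughly like the sketch below. This is an illustrative excerpt that runs inside the service class's initialization code; the exact surrounding code depends on the template version:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre># Excerpt (sketch) of the edited context selection in mxnet_model_service.py.
from packaging import version
import mxnet as mx

mxnet_version = version.parse(mx.__version__)
if mxnet_version >= version.parse("1.8"):
    import mx_neuron as neuron          # registers the mx.neuron() context
# Original CPU/GPU selection, now commented out:
# self.mxnet_ctx = mx.cpu() if gpu_id is None else mx.gpu(gpu_id)
self.mxnet_ctx = mx.neuron()            # run inference on Neuron
# ...and later in the file, skip the unnecessary device copy of the inputs:
# model_input = [item.as_in_context(self.mxnet_ctx) for item in model_input]
</pre></div></div>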
<ol class="arabic simple" start="6">
<li><p>Package the model with model-archiver:</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="nb">cd</span><span class="w"> </span>~/multi-model-server/examples
model-archiver<span class="w"> </span>--force<span class="w"> </span>--model-name<span class="w"> </span>resnet-50_compiled<span class="w"> </span>--model-path<span class="w"> </span>mxnet_vision<span class="w"> </span>--handler<span class="w"> </span>mxnet_vision_service:handle
</pre></div>
</div>
<ol class="arabic simple" start="7">
<li><p>Start the Multi Model Server (MMS) and load the model using the RESTful API.
Please ensure that Neuron RTD is running with default settings (see
<span class="xref std std-ref">rtd-getting-started</span>):</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="nb">cd</span><span class="w"> </span>~/multi-model-server/
multi-model-server<span class="w"> </span>--start<span class="w"> </span>--model-store<span class="w"> </span>examples
<span class="c1"># Pipe to log file if you want to keep a log of MMS</span>
curl<span class="w"> </span>-v<span class="w"> </span>-X<span class="w"> </span>POST<span class="w"> </span><span class="s2">"http://localhost:8081/models?initial_workers=1&max_workers=1&synchronous=true&url=resnet-50_compiled.mar"</span>
sleep<span class="w"> </span><span class="m">10</span><span class="w"> </span><span class="c1"># allow sufficient time to load model</span>
</pre></div>
</div>
<p>Each worker requires a NeuronCore group that can accommodate the compiled
model. Additional workers can be added by increasing the max_workers
configuration, as long as enough NeuronCores are available. Use
<code class="docutils literal notranslate"><span class="pre">neuron-top</span></code> to see which models are loaded on specific NeuronCores.</p>
<ol class="arabic simple" start="8">
<li><p>Test inference using an example image:</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>curl<span class="w"> </span>-O<span class="w"> </span>https://raw.githubusercontent.com/awslabs/multi-model-server/master/docs/images/kitten_small.jpg
curl<span class="w"> </span>-X<span class="w"> </span>POST<span class="w"> </span>http://127.0.0.1:8080/predictions/resnet-50_compiled<span class="w"> </span>-T<span class="w"> </span>kitten_small.jpg
</pre></div>
</div>
<p>You should see output similar to the following:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="o">[</span>
<span class="w"> </span><span class="o">{</span>
<span class="w"> </span><span class="s2">"probability"</span>:<span class="w"> </span><span class="m">0</span>.6375716328620911,
<span class="w"> </span><span class="s2">"class"</span>:<span class="w"> </span><span class="s2">"n02123045 tabby, tabby cat"</span>
<span class="w"> </span><span class="o">}</span>,
<span class="w"> </span><span class="o">{</span>
<span class="w"> </span><span class="s2">"probability"</span>:<span class="w"> </span><span class="m">0</span>.1692783385515213,
<span class="w"> </span><span class="s2">"class"</span>:<span class="w"> </span><span class="s2">"n02123159 tiger cat"</span>
<span class="w"> </span><span class="o">}</span>,
<span class="w"> </span><span class="o">{</span>
<span class="w"> </span><span class="s2">"probability"</span>:<span class="w"> </span><span class="m">0</span>.12187337130308151,
<span class="w"> </span><span class="s2">"class"</span>:<span class="w"> </span><span class="s2">"n02124075 Egyptian cat"</span>
<span class="w"> </span><span class="o">}</span>,
<span class="w"> </span><span class="o">{</span>
<span class="w"> </span><span class="s2">"probability"</span>:<span class="w"> </span><span class="m">0</span>.028840631246566772,
<span class="w"> </span><span class="s2">"class"</span>:<span class="w"> </span><span class="s2">"n02127052 lynx, catamount"</span>
<span class="w"> </span><span class="o">}</span>,
<span class="w"> </span><span class="o">{</span>
<span class="w"> </span><span class="s2">"probability"</span>:<span class="w"> </span><span class="m">0</span>.019691042602062225,
<span class="w"> </span><span class="s2">"class"</span>:<span class="w"> </span><span class="s2">"n02129604 tiger, Panthera tigris"</span>
<span class="w"> </span><span class="o">}</span>
<span class="o">]</span>
</pre></div>
</div>
<ol class="arabic simple" start="9">
<li><p>To cleanup after test, issue a delete command via RESTful API and
stop the model server:</p></li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>curl<span class="w"> </span>-X<span class="w"> </span>DELETE<span class="w"> </span>http://127.0.0.1:8081/models/resnet-50_compiled
multi-model-server<span class="w"> </span>--stop
</pre></div>
</div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code></p>
|
2023-09-29T20:54:54.653Z
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/troubleshooting-guide.rst.txt
|
```
.. _mxnet_troubleshooting_guide:
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. contents:: Table of Contents
:local:
:depth: 2
Inference Runtime Error
=======================
Out-of-memory error when calling Symbol API bind() too many times
-----------------------------------------------------------------
.. important ::
``NEURONCORE_GROUP_SIZES`` will no longer be supported starting with the Neuron 1.19.0 release. If your application is using ``NEURONCORE_GROUP_SIZES``, please
see :ref:`neuron-migrating-apps-neuron-to-libnrt` and :ref:`eol-ncgs-env_2` for more details.
If you see an out-of-memory error when using the Symbol API's bind() function, please ensure that the bind() function is
called once for each desired model instance. For example, on inf1.xlarge, use the Symbol API to create 4 parallel
instances of a model that was compiled to 1 NeuronCore (--neuroncore-pipeline-cores=1), each bound to a
different mx.neuron(i) context where i is the NeuronCore Group index ranging from 0 to 3. Then use 4 threads to feed
the 4 instances in parallel. For example:
.. code:: python
import os
from concurrent import futures
import mxnet as mx
NUM_PARALLEL = 4
os.environ['NEURONCORE_GROUP_SIZES'] = ','.join('1' for _ in range(NUM_PARALLEL))
# one data iterator per model instance (recfile_base: path to your RecordIO file)
data_iter = []
for i in range(NUM_PARALLEL):
    data_iter.append(mx.io.ImageRecordIter(
        path_imgrec=recfile_base, data_shape=(3, 224, 224), batch_size=1,
        prefetch_buffer=1,
        num_parts=NUM_PARALLEL, part_index=i))
sym, args, auxs = mx.model.load_checkpoint('resnet-50_compiled', 0)
# bind() is called exactly once per model instance, each on its own context
exec_list = []
for i in range(NUM_PARALLEL):
    bound_exec = sym.bind(ctx=mx.neuron(i), args=args, aux_states=auxs, grad_req='null')
    exec_list.append(bound_exec)
def single_thread_infer(i):
    for batch in data_iter[i]:
        img = batch.data[0]
        label = batch.label
        feed_dict = {'data': img}
        exe = exec_list[i]
        exe.copy_params_from(feed_dict)
        exe.forward()
        out = exe.outputs[0]
future_list = []
with futures.ThreadPoolExecutor(max_workers=NUM_PARALLEL) as executor:
    for i in range(NUM_PARALLEL):
        future_list.append(executor.submit(single_thread_infer, i))
Inference crashed with MXNetError: InferShapeKeyword argument name xyz not found
--------------------------------------------------------------------------------
If you see MXNetError:
.. code:: bash
mxnet.base.MXNetError: [11:55:39] src/c_api/c_api_symbolic.cc:508: InferShapeKeyword argument name xyz not found."
This is followed by a list of "Candidate arguments". This list shows all the input argument names that the model knows about, and 'xyz' is not in the list. To fix this, remove entry xyz from the feed dictionary.
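A minimal sketch of the fix, assuming ``feed_dict`` is the argument dictionary passed to the executor and ``'xyz'`` is the offending key:
.. code:: python
feed_dict.pop('xyz', None)  # drop the argument the model does not recognize
exe.copy_params_from(feed_dict)
exe.forward()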
Inference crashed at mx.nd.waitall() with MXNetError: Check failed: bin.dtype() == mshadow::kUint8
--------------------------------------------------------------------------------------------------
This error can occur when executing the Symbol API's forward function followed by mx.nd.waitall(): an MXNetError exception is raised with 'Check failed: bin.dtype() == mshadow::kUint8'.
Inference crashed with NRTD error 1002
--------------------------------------
During inference, the user may encounter an error with details "[NRTD:infer_wait] error: 1002":
.. code:: bash
mxnet.base.MXNetError: [11:26:56] src/operator/subgraph/neuron/./neuron_util.h:1175: Check failed: rsp_wait.status().code() == 0 || rsp_wait.status().code() == 1003: Failed
Infer Wait with Neuron-RTD Error. Neuron-RTD Status Code: 1002, details: "[NRTD:infer_wait] error: 1002
"
Runtime errors are listed in :ref:`rtd-return-codes`. In particular, 1002 means that some invalid input has been submitted to infer, e.g. some input tensors are missing or have incorrect sizes. Please examine /var/log/syslog to see more details on the error. For example, you may see:
.. code::
Oct 30 19:13:39 ip-172-31-93-131 nrtd[1125]: [TDRV:io_queue_prepare_input_nonhugetlb] Unexpected input size, for data00, expected: 2097152, received: 33554432
This means that the input tensor size is larger than what the model was compiled for (i.e. the example input tensor shapes passed during compilation).
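You can catch this before submitting an inference by comparing shapes up front (a sketch; the expected shape here is hypothetical and must match the example inputs the model was compiled with):
.. code:: python
expected_shape = (1, 3, 224, 224)  # hypothetical: the shape passed at compile time
assert img.shape == expected_shape, 'got {}, compiled for {}'.format(img.shape, expected_shape)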
Multi-Model Server
==================
Failed to create NEURONCORE Group with GRPC Error. Status Error: 14, Error message: "Connect Failed"
----------------------------------------------------------------------------------------------------
NOTE: This error only applies to MXNet 1.5.
If the client is unable to start workers and you get a message that MMS is unable to create NeuronCore Group,
please check that Neuron RTD is running (neuron-rtd process).
.. code:: json
{
"code": 500,
"type": "InternalServerException",
"message": "Failed to start workers“
}
.. code:: bash
2019-10-23 19:56:23,187 [INFO ] W-9000-squeezenet_v1.1_compiled-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - [19:56:23] src/operator/subgraph/inferentia/./inferentia_util.h:218: Check failed: status.ok() Failed to create NeuronCore Group with GRPC Error. Status Error: 14, Error message: "Connect Failed"
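One quick way to confirm that neuron-rtd is up, assuming it is managed by systemd on your instance:
.. code:: bash
sudo systemctl status neuron-rtd   # should report active (running)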
Multiple MMS workers die with “Backend worker process die.” message
-------------------------------------------------------------------
.. important ::
``NEURONCORE_GROUP_SIZES`` will no longer be supported starting with the Neuron 1.19.0 release. If your application is using ``NEURONCORE_GROUP_SIZES``, please
see :ref:`neuron-migrating-apps-neuron-to-libnrt` and :ref:`eol-ncgs-env_2` for more details.
If you run inference with MMS and get multiple “Backend worker process die" messages, please ensure that the number of workers ("initial_workers") passed during model load is less than or equal to the number of NeuronCores available divided by the number of NeuronCores required by the model. For example, a model compiled for 2 NeuronCores on an inf1.6xlarge (16 NeuronCores) supports at most 8 workers.
.. code:: bash
com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Backend worker process die.
com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Traceback (most recent call last):
com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/usr/local/lib/python3.6/site-packages/mxnet/symbol/symbol.py", line 1524, in simple_bind
com.amazonaws.ml.mms.wlm.WorkerLifeCycle - ctypes.byref(exe_handle)))
com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/usr/local/lib/python3.6/site-packages/mxnet/base.py", line 252, in check_call
com.amazonaws.ml.mms.wlm.WorkerLifeCycle - raise MXNetError(py_str(_LIB.MXGetLastError()))
com.amazonaws.ml.mms.wlm.WorkerLifeCycle - mxnet.base.MXNetError: [00:26:32] src/operator/subgraph/neuron/./neuron_util.h:221: Check failed: 0 == create_eg_rsp.status().code() Failed to create NeuronCore Group with KRTD Error. KRTD Status Code: 4, details: ""
As indicated in :ref:`appnote-performance-tuning`, for greater flexibility users can use NEURONCORE_GROUP_SIZES to specify the groupings of NeuronCores into Neuron devices, each device consisting of one or more NeuronCores. Each worker takes one device. The total number of NeuronCores taken by all the workers should be less than or equal to the total number of NeuronCores visible to neuron-rtd. This should be considered at full load (MMS scales up to max_workers). Additionally, to properly assign a model to a Neuron device, the NEURONCORE_GROUP_SIZES environment variable must be set within the model server class (i.e. mxnet_model_service.py in the example above). For example, add the following line within mxnet_model_service.py for a model compiled to 1 NeuronCore:
.. code:: python
os.environ['NEURONCORE_GROUP_SIZES'] = '1'
More information about the max_worker limit setting can be found in the `MMS Management API Documentation`_. For example, to run up to 4 workers on inf1.xlarge, where 4 NeuronCores are available by default to Neuron-RTD, set max_worker to 4:
.. _MMS Management API Documentation: https://github.com/awslabs/multi-model-server/blob/master/docs/management_api.md#user-content-scale-workers
.. code:: bash
curl -v -X PUT "http://localhost:8081/models/squeezenet_v1.1_compiled?min_worker=1&max_worker=4"
MMS throws a "mxnet.base.MXNetError: array::at" error
-----------------------------------------------------
If you see “mxnet.base.MXNetError: array::at” when running MMS, please check that the NDArray/Gluon API is not used, as it is not supported in MXNet-Neuron 1.5.
If you would like to use the NDArray or Gluon API, please upgrade to MXNet 1.8.
.. code:: bash
[INFO ] W-9000-squeezenet_v1.1_compiled-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - array::at
[INFO ] W-9000-squeezenet_v1.1_compiled com.amazonaws.ml.mms.wlm.WorkerThread - Backend response time: 30
[INFO ] W-9000-squeezenet_v1.1_compiled-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Traceback (most recent call last):
[INFO ] W-9000-squeezenet_v1.1_compiled-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/tmp/models/6606fa046f68a34df87f15362a7a2d9a49749878/model_handler.py", line 82, in handle
[INFO ] W-9000-squeezenet_v1.1_compiled-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - data = self.inference(data)
[INFO ] W-9000-squeezenet_v1.1_compiled-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/tmp/models/6606fa046f68a34df87f15362a7a2d9a49749878/mxnet_model_service.py", line 153, in inference
[INFO ] W-9000-squeezenet_v1.1_compiled-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - d.wait_to_read()
[INFO ] W-9000-squeezenet_v1.1_compiled-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/home/user/regression_venv_p3.6/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", line 1819, in wait_to_read
[INFO ] W-9000-squeezenet_v1.1_compiled-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - check_call(_LIB.MXNDArrayWaitToRead(self.handle))
[INFO ] W-9000-squeezenet_v1.1_compiled-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - File "/home/user/regression_venv_p3.6/lib/python3.6/site-packages/mxnet/base.py", line 253, in check_call
[INFO ] W-9000-squeezenet_v1.1_compiled-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - raise MXNetError(py_str(_LIB.MXGetLastError()))
[INFO ] W-9000-squeezenet_v1.1_compiled-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - mxnet.base.MXNetError: array::at
[INFO ] W-9000-squeezenet_v1.1_compiled-stdout com.amazonaws.ml.mms.wlm.WorkerLifeCycle - Invoking custom service failed.
MXNet Model Server is not able to clean up Neuron RTD states after model is unloaded
------------------------------------------------------------------------------------
NOTE: This issue is resolved in version 1.5.1.1.1.88.0 released 11/17/2020 and only applies to MXNet 1.5.
MXNet Model Server is not able to clean up Neuron RTD states after model is unloaded (deleted) from model server. Restarting the model server may fail with "Failed to create NEURONCORE_GROUP" error:
.. code:: bash
mxnet.base.MXNetError: [00:26:59] src/operator/subgraph/neuron/./neuron_util.h:348: Check failed: 0 == create_eg_rsp.status().code(): Failed to create NEURONCORE_GROUP with Neuron-RTD Error. Neuron-RTD Status Code: 9, details: ""
The workaround is to run ``/opt/aws/neuron/bin/neuron-cli reset`` to clear Neuron RTD states after all models are unloaded and the server is shut down, before restarting the model server.
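Put together, the restart sequence looks like this (a sketch built from the commands above; the path assumes a default Neuron install):
.. code:: bash
multi-model-server --stop
/opt/aws/neuron/bin/neuron-cli reset
multi-model-server --start --model-store examples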
Pipeline mode is not able to execute inferences requests in parallel
--------------------------------------------------------------------
If you see that multiple executors in a Neuron pipeline setup (one model compiled for more than one NeuronCore using the `--neuroncore-pipeline-cores` option during compilation) are not running in parallel, set the ``MXNET_CPU_WORKER_NTHREADS`` environment variable before inference to allow MXNet to execute the CPU ops in parallel; otherwise execution is sequential and stalls the executors.
Setting it to the value of the ``__subgraph_opt_neuroncore__`` attribute in the compiled model JSON ensures that all the executors (threads) can run in parallel.
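For example (a sketch; the value 4 is hypothetical and should be read from the ``__subgraph_opt_neuroncore__`` attribute of your compiled ``*-symbol.json``):
.. code:: bash
export MXNET_CPU_WORKER_NTHREADS=4
python your_neuron_application.py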
Features only in MXNet-Neuron 1.5
---------------------------------
- Shared memory for IFMaps transfer to neuron runtime (has higher performance compared to GRPC mode)
- Neuron profiling using MXNet
Features only in MXNet-Neuron 1.8
---------------------------------
- Gluon API support
- Library mode neuron runtime
```
|
2023-09-29T20:54:54.701Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/misc-mxnet-neuron.rst.txt
|
```
Misc (mxnet-neuron)
===================
.. toctree::
:maxdepth: 1
:hidden:
/frameworks/mxnet-neuron/troubleshooting-guide
What's New </release-notes/mxnet-neuron/mxnet-neuron>
/release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet
.. include:: /frameworks/mxnet-neuron/misc-mxnet-neuron.txt
```
|
2023-09-29T20:54:54.713Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/appnotes/mxnet-neuron/flex-eg.rst.txt
|
```
.. _flexeg:
Flexible Execution Group (FlexEG) in Neuron-MXNet
=================================================
Introduction
------------
Inf1 instances are available with different numbers of Inferentia
chips; each Inferentia chip consists of 4 NeuronCores, and an Inf1
instance includes 4 to 64 NeuronCores depending on the instance size.
With Neuron Runtime 1.x (neuron-rtd server), NeuronCores could be
combined into :ref:`NeuronCore Groups (NCG) <neuron-core-group>`,
which were the basic scheduling units of compiled neural networks in Neuron.
NCGs of the desired sizes were created at the start of the application
and could not be modified afterwards.
Starting with Neuron SDK 1.16.0, and with the introduction of Neuron
Runtime 2.x, MXNet Neuron 1.8 introduces the Flexible Execution Groups
(FlexEG) feature. With FlexEG, you do not have to create NCGs at the
start of the process; instead you set the index of the first
NeuronCore you want to load models onto, and FlexEG enables
the flexibility of loading models onto any available NeuronCore on the
Inf1 instance starting from the first NeuronCore you set. This guide
will show you how to efficiently utilize NeuronCores using the FlexEG
feature in NeuronMXNet.
FlexEG
------
With the introduction of FlexEG, you don’t need to create NCGs and can
load models onto a group of consecutive NeuronCores by providing the
index of the first NeuronCore in the group. Neuron runtime takes care of
figuring out the number of NeuronCores required for the given compiled
model and loads the model using the required number of cores
(sequentially starting with the NeuronCore index provided by the user).
For example, assuming that you have an Inf1.6xl machine and there are 4
models A, B, C, D compiled to 2, 4, 3, and 4 NeuronCores respectively,
you can map any model to any core by context
``mx.neuron(neuron_core_index)`` where ``neuron_core_index`` is the
NeuronCore index (0,1,2,3,4 … ).
In the example below, you map model A to ``mx.neuron(0)`` context, model
B to ``mx.neuron(2)`` context, model C to ``mx.neuron(6)`` context and
model D to ``mx.neuron(9)`` context.
.. figure:: /images/mx_FlexEG_arch_1.png
:scale: 80 %
The above configuration is achieved by using application code similar to
below:
.. code :: python
# Load models (MXNet)
# loaded into the 2 cores starting with core 0
sym, args, aux = mx.model.load_checkpoint(mx_model0_file, 0)
model0 = sym.bind(ctx=mx.neuron(0), args=args, aux_states=aux, grad_req='null')
# loaded into the 4 cores starting with core 2
sym, args, aux = mx.model.load_checkpoint(mx_model1_file, 0)
model1 = sym.bind(ctx=mx.neuron(2), args=args, aux_states=aux, grad_req='null')
# loaded into the 3 cores starting with core 6
sym, args, aux = mx.model.load_checkpoint(mx_model2_file, 0)
model2 = sym.bind(ctx=mx.neuron(6), args=args, aux_states=aux, grad_req='null')
# loaded into the 4 cores starting with core 9
sym, args, aux = mx.model.load_checkpoint(mx_model3_file, 0)
model3 = sym.bind(ctx=mx.neuron(9), args=args, aux_states=aux, grad_req='null')
# run inference by simply calling the loaded model
results0 = model0.forward(data=inputs0)
results1 = model1.forward(data=inputs1)
results2 = model2.forward(data=inputs2)
results3 = model3.forward(data=inputs3)
Since there is no NCG creation at the start of the process, you can load
the same four models but in a different configuration by changing the
context being used for inference. For example, you could map model C to
``mx.neuron(0)`` context, model A to ``mx.neuron(3)`` context, model D
to ``mx.neuron(5)`` context and model B to ``mx.neuron(9)`` context.
.. figure:: /images/mx_FlexEG_arch_2.png
:scale: 80 %
Migration from NeuronCore Groups to FlexEG
------------------------------------------
NeuronCore Groups are defined by setting the environment variable
``NEURONCORE_GROUP_SIZES`` with a comma separated list of number of
cores in each group. In this mode of operation, number of devices
(defined in ``NEURONCORE_GROUP_SIZES``) are grouped together to create a
single entity.
``NEURONCORE_GROUP_SIZES`` environment variable is set at runtime:
.. code :: bash
#!/bin/bash
export NEURONCORE_GROUP_SIZES=2,4,3,4
python your_neuron_application.py
NeuronCore groups are created once at the start of the application and
cannot be modified or re-created for as long as the application process runs. The
above flow creates 4 NeuronCore groups with 2, 4, 3 and 4 NeuronCores each. In
order to get the same configuration as the example from before, you map
model A to ``mx.neuron(0)`` context, model B to ``mx.neuron(1)``
context, model C to ``mx.neuron(2)`` context and model D to
``mx.neuron(3)`` context.
.. figure:: /images/mx_FlexEG_arch_1.png
:scale: 80 %
This can be achieved programmatically as shown below:
.. code :: python
# Set Environment
os.environ['NEURONCORE_GROUP_SIZES']='2,4,3,4'
# Load models (MXNet)
# loaded into the first group of NC0-NC1
sym, args, aux = mx.model.load_checkpoint(mx_model0_file, 0)
model0 = sym.bind(ctx=mx.neuron(0), args=args, aux_states=aux, grad_req='null')
# loaded into the second group of NC2-NC5
sym, args, aux = mx.model.load_checkpoint(mx_model1_file, 0)
model1 = sym.bind(ctx=mx.neuron(1), args=args, aux_states=aux, grad_req='null')
# loaded into the third group of NC6-NC8
sym, args, aux = mx.model.load_checkpoint(mx_model2_file, 0)
model2 = sym.bind(ctx=mx.neuron(2), args=args, aux_states=aux, grad_req='null')
# loaded into the fourth group of NC9-NC12
sym, args, aux = mx.model.load_checkpoint(mx_model3_file, 0)
model3 = sym.bind(ctx=mx.neuron(3), args=args, aux_states=aux, grad_req='null')
# run inference by simply calling the loaded model
results0 = model0.forward(data=inputs0)
results1 = model1.forward(data=inputs1)
results2 = model2.forward(data=inputs2)
results3 = model3.forward(data=inputs3)
Comparing the two approaches: with NCGs the neuron context
takes the index of the execution group, while with FlexEG the neuron
context takes the NeuronCore index of the first NeuronCore on which the
model is supposed to be loaded and executed. For example, with
``NEURONCORE_GROUP_SIZES='2,4,3,4'``, ``ctx=mx.neuron(1)`` loads the
model on execution group 1, i.e. the second NCG,
which has 4 NeuronCores.
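The mapping between the two indexing schemes for this example can be computed directly (a sketch):
.. code:: python
group_sizes = [2, 4, 3, 4]   # NEURONCORE_GROUP_SIZES
starts = [sum(group_sizes[:i]) for i in range(len(group_sizes))]
# starts == [0, 2, 6, 9]: NCG group i begins at FlexEG NeuronCore index starts[i]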
Best practices when using FlexEG
--------------------------------
FlexEG gives the user the most flexibility in terms of accessing cores and
loading models on specific cores. With this, users can effortlessly
load and execute new models on NeuronCores without closing the
application. Here we outline some of the best practices that
should be kept in mind while using FlexEG.
Choosing starting core
~~~~~~~~~~~~~~~~~~~~~~
FlexEG tries to use the required number of cores (based on the input
model) starting with the core index provided by the user. If the
system does not have the required number of cores from the starting core
index onward, the model load will fail. For example, take a model X which needs
2 cores and an inf1.xl machine with 4 NeuronCores (NeuronCore indexes
0, 1, 2 and 3). As the model needs at least 2 cores, the valid start
indexes for this model are 0, 1 and 2. If the user gives 3 as the
neuron context, there are no 2 consecutive cores available starting from core
3, so the load will fail.
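The rule generalizes easily (a sketch with a hypothetical helper):
.. code:: python
def valid_start_indexes(total_cores, cores_needed):
    # all start indexes that leave enough consecutive cores for the model
    return list(range(total_cores - cores_needed + 1))
valid_start_indexes(4, 2)   # -> [0, 1, 2]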
Performance vs. Flexibility tradeoff
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
While using the data parallel mode of operation (where models are executed
in parallel), for optimal performance the user should make sure that the
models are not sharing any cores. That is because a NeuronCore can
execute one model at a time: when two or more models are executed on the
same core (assuming that they are already loaded), it executes the first model, stops it, starts the second
model and then executes it. This is called model switching; it involves
additional overhead and prevents the models from executing in parallel. For
example, assume that you have an Inf1.6xl machine and there are 4
models A, B, C, D compiled to 2, 4, 3, and 4 NeuronCores respectively.
Loading model A to ``mx.neuron(0)`` context, model B to ``mx.neuron(2)``
context, model C to ``mx.neuron(6)`` context and model D to
``mx.neuron(9)`` context is a good configuration because no two models
share NeuronCores and thus all can be executed in parallel. However,
loading model A to ``mx.neuron(0)`` context, model B to ``mx.neuron(2)``
context, model C to ``mx.neuron(5)`` context and model D to
``mx.neuron(9)`` context is not a good configuration, as models B and C
share NeuronCore 5 and thus cannot be executed in parallel.
.. figure:: /images/mx_FlexEG_arch_bad.png
:scale: 80 %
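A placement can be sanity-checked for NeuronCore overlap before loading (a sketch; each entry is a hypothetical (start_core, cores_needed) pair):
.. code:: python
placements = [(0, 2), (2, 4), (5, 3), (9, 4)]   # the bad configuration above
used = [set(range(s, s + n)) for s, n in placements]
overlap = any(a & b for i, a in enumerate(used) for b in used[i + 1:])
print(overlap)   # True: models B and C both use NeuronCore 5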
```
|
2023-09-29T20:54:54.727Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.rst.txt
|
```
.. _neuron-cc-ops-mxnet:
Neuron Apache MXNet (Incubating) Supported operators
====================================================
To see a list of supported operators for MXNet, run the following command:
``neuron-cc list-operators --framework MXNET``
.. _neuron-compiler-release-1600:
Neuron Compiler Release [1.6.13.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added
::
amp_cast
amp_multicast
.. _neuron-compiler-release-1410:
Neuron Compiler Release [1.4.1.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-1400:
Neuron Compiler Release [1.4.0.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-1300:
Neuron Compiler Release [1.3.0.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-1270:
Neuron Compiler Release [1.2.7.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-1220:
Neuron Compiler Release [1.2.2.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-1200:
Neuron Compiler Release [1.2.0.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added
::
Deconvolution
LayerNorm
Pad
SwapAxis
_contrib_arange_like
_contrib_interleaved_matmul_encdec_qk
_contrib_interleaved_matmul_encdec_valatt
_contrib_interleaved_matmul_selfatt_qk
_contrib_interleaved_matmul_selfatt_valatt
arctan
broadcast_like
cos
erf
pad
sin
slice_axis
.. _neuron-compiler-release-10240450:
Neuron Compiler Release [1.0.24045.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added ``_contrib_div_sqrt_dim``, ``broadcast_axis``
.. _neuron-compiler-release-10180010:
Neuron Compiler Release [1.0.18001.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-10179370:
Neuron Compiler Release [1.0.17937.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-10168610:
Neuron Compiler Release [1.0.16861.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Removed ``log`` (was erroneously reported as added in the previous release).
.. _neuron-compiler-release-1015275:
Neuron Compiler Release [1.0.15275]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Added ``log``
.. _neuron-compiler-release-1012696:
Neuron Compiler Release [1.0.12696]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-109410:
Neuron Compiler Release [1.0.9410]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-107878:
Neuron Compiler Release [1.0.7878]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-106801:
Neuron Compiler Release [1.0.6801]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-105939:
Neuron Compiler Release [1.0.5939]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-105301:
Neuron Compiler Release [1.0.5301]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No changes
.. _neuron-compiler-release-1046800:
Neuron Compiler Release [1.0.4680.0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
Activation
BatchNorm
Cast
Concat
Convolution
Convolution_v1
Dropout
Flatten
FullyConnected
LeakyReLU
Pooling
Pooling_v1
RNN
Reshape
SequenceMask
SliceChannel
Softmax
UpSampling
__add_scalar__
__div_scalar__
__mul_scalar__
__pow_scalar__
__rdiv_scalar__
__rpow_scalar__
__rsub_scalar__
__sub_scalar__
_arange
_copy
_div_scalar
_equal_scalar
_full
_greater_equal_scalar
_greater_scalar
_lesser_equal_scalar
_lesser_scalar
_maximum
_maximum_scalar
_minimum
_minimum_scalar
_minus_scalar
_mul_scalar
_not_equal_scalar
_ones
_plus_scalar
_power_scalar
_rdiv_scalar
_rminus_scalar
_rnn_param_concat
_zeros
batch_dot
broadcast_add
broadcast_div
broadcast_equal
broadcast_greater
broadcast_greater_equal
broadcast_lesser
broadcast_lesser_equal
broadcast_maximum
broadcast_minimum
broadcast_mod
broadcast_mul
broadcast_not_equal
broadcast_sub
ceil
clip
concat
elemwise_add
elemwise_div
elemwise_mul
elemwise_sub
exp
expand_dims
flatten
floor
gather_nd
log
log_softmax
max
mean
min
negative
ones_like
relu
repeat
reshape
reshape_like
reverse
rsqrt
sigmoid
slice
slice_like
softmax
split
sqrt
square
squeeze
stack
sum
tanh
tile
transpose
where
zeros_like
```
|
2023-09-29T20:54:54.734Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/frameworks/mxnet-neuron/developer-guide.rst.txt
|
```
Developer Guide
===============
.. toctree::
:maxdepth: 1
:hidden:
/general/appnotes/mxnet-neuron/flex-eg
.. include:: /frameworks/mxnet-neuron/developer-guide.txt
```
|
2023-09-29T20:54:54.739Z
|