# Running Neuron Apache MXNet (Incubating) ResNet50 on Inferentia — AWS Neuron Documentation
## Contents
- [Introduction](#introduction)
- [Warning](#warning)
- [Compile model on Neuron](#compile-model-on-neuron)
- [Deploy on Inferentia](#deploy-on-inferentia)
## Introduction
In this tutorial we compile and deploy a ResNet50 model for Inferentia. The tutorial has two main sections:
1. Compile the ResNet50 model.
2. Run inference with the compiled model.
Before running the following, verify that this Jupyter notebook is using the “conda_aws_neuron_mxnet_p36” kernel. You can select the kernel from the “Kernel -> Change Kernel” menu at the top of this Jupyter notebook page. Neuron supports the MXNet Module and Symbol Python APIs as well as the C Predict API; the following quick-start example uses the Symbol API.
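If you are unsure whether the Neuron-enabled MXNet build is active in the selected kernel, a quick sanity check is to import MXNet and print its version (the exact version string depends on the installed mxnet-neuron release):

```
import mxnet as mx

# Print the version of the MXNet build available in this kernel.
# A Neuron-enabled build is required for the mx.contrib.neuron.compile call below.
print(mx.__version__)
```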
## Compile model on Neuron
The following step compiles the ResNet50 model. Compilation takes a few minutes on an inf1.6xlarge instance. At the end of compilation, the files `resnet-50_compiled-0000.params` and `resnet-50_compiled-symbol.json` are created in the local directory.
```
import mxnet as mx
import numpy as np
path='http://data.mxnet.io/models/imagenet/'
# download the pre-trained ResNet-50 checkpoint from the MXNet model zoo
mx.test_utils.download(path+'resnet/50-layers/resnet-50-0000.params')
mx.test_utils.download(path+'resnet/50-layers/resnet-50-symbol.json')
sym, args, aux = mx.model.load_checkpoint('resnet-50', 0)
# compile for Inferentia using Neuron with a fixed input shape of (1, 3, 224, 224)
inputs = { "data" : mx.nd.ones([1,3,224,224], name='data', dtype='float32') }
sym, args, aux = mx.contrib.neuron.compile(sym, args, aux, inputs)
# save the compiled model as resnet-50_compiled-0000.params / resnet-50_compiled-symbol.json
mx.model.save_checkpoint("resnet-50_compiled", 0, sym, args, aux)
```
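As a quick optional check, you can confirm that both compiled checkpoint files were written to the working directory before moving on to deployment:

```
import os

# The compile step saves the checkpoint with the prefix "resnet-50_compiled".
for fname in ("resnet-50_compiled-0000.params", "resnet-50_compiled-symbol.json"):
    print(fname, "found" if os.path.exists(fname) else "missing")
```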
## Deploy on Inferentia
Use the same instance to deploy the model.
```
import mxnet as mx
import numpy as np

path='http://data.mxnet.io/models/imagenet/'
mx.test_utils.download(path+'synset.txt')
fname = mx.test_utils.download('https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg?raw=true')

# convert the image into format (batch, RGB, width, height)
img = mx.image.imread(fname)
img = mx.image.imresize(img, 224, 224)  # resize
img = img.transpose((2, 0, 1))  # channel first
img = img.expand_dims(axis=0)  # batchify
img = img.astype(dtype='float32')

sym, args, aux = mx.model.load_checkpoint('resnet-50_compiled', 0)
softmax = mx.nd.random_normal(shape=(1,))
args['softmax_label'] = softmax
args['data'] = img

# Inferentia context
ctx = mx.neuron()
exe = sym.bind(ctx=ctx, args=args, aux_states=aux, grad_req='null')

with open('synset.txt', 'r') as f:
    labels = [l.rstrip() for l in f]

exe.forward(data=img)
prob = exe.outputs[0].asnumpy()

# print the top-5 predictions
prob = np.squeeze(prob)
a = np.argsort(prob)[::-1]
for i in a[0:5]:
    print('probability=%f, class=%s' % (prob[i], labels[i]))

# Sample output will look like below:
# probability=0.634792, class=n02123045 tabby, tabby cat
# probability=0.193601, class=n02123159 tiger cat
# probability=0.103627, class=n02124075 Egyptian cat
# probability=0.031604, class=n02127052 lynx, catamount
# probability=0.015892, class=n02129604 tiger, Panthera tigris
```
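To get a rough sense of per-request latency, one simple sketch is to repeat the forward pass on the bound executor and time it. Neuron-compiled models run with the fixed input shape used at compile time (here 1x3x224x224), so `img` must keep that shape:

```
import time

# Warm up once so one-time loading overhead is not counted.
exe.forward(data=img)
exe.outputs[0].wait_to_read()

n = 100
start = time.time()
for _ in range(n):
    exe.forward(data=img)
    exe.outputs[0].wait_to_read()  # block until the output is ready
print('average latency: %.2f ms' % ((time.time() - start) / n * 1000.0))
```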
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>Running Neuron Apache MXNet (Incubating) ResNet50 on Inferentia</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
<div>
<h2> Contents </h2>
</div>
<nav aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Introduction:">
Introduction:
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Warning">
Warning
</a>
</li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Compile-model-on-Neuron">
Compile model on Neuron
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Deploy-on-Inferentia">
Deploy on Inferentia
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
<style>
/* CSS for nbsphinx extension */
/* remove conflicting styling from Sphinx themes */
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt *,
div.nbinput.container div.input_area pre,
div.nboutput.container div.output_area pre,
div.nbinput.container div.input_area .highlight,
div.nboutput.container div.output_area .highlight {
border: none;
padding: 0;
margin: 0;
box-shadow: none;
}
div.nbinput.container > div[class*=highlight],
div.nboutput.container > div[class*=highlight] {
margin: 0;
}
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt * {
background: none;
}
div.nboutput.container div.output_area .highlight,
div.nboutput.container div.output_area pre {
background: unset;
}
div.nboutput.container div.output_area div.highlight {
color: unset; /* override Pygments text color */
}
/* avoid gaps between output lines */
div.nboutput.container div[class*=highlight] pre {
line-height: normal;
}
/* input/output containers */
div.nbinput.container,
div.nboutput.container {
display: -webkit-flex;
display: flex;
align-items: flex-start;
margin: 0;
width: 100%;
}
@media (max-width: 540px) {
div.nbinput.container,
div.nboutput.container {
flex-direction: column;
}
}
/* input container */
div.nbinput.container {
padding-top: 5px;
}
/* last container */
div.nblast.container {
padding-bottom: 5px;
}
/* input prompt */
div.nbinput.container div.prompt pre {
color: #307FC1;
}
/* output prompt */
div.nboutput.container div.prompt pre {
color: #BF5B3D;
}
/* all prompts */
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: 4.5ex;
padding-top: 5px;
position: relative;
user-select: none;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: absolute;
right: 0;
margin-right: 0.3ex;
}
@media (max-width: 540px) {
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: unset;
text-align: left;
padding: 0.4em;
}
div.nboutput.container div.prompt.empty {
padding: 0;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: unset;
}
}
/* disable scrollbars on prompts */
div.nbinput.container div.prompt pre,
div.nboutput.container div.prompt pre {
overflow: hidden;
}
/* input/output area */
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
-webkit-flex: 1;
flex: 1;
overflow: auto;
}
@media (max-width: 540px) {
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
width: 100%;
}
}
/* input area */
div.nbinput.container div.input_area {
border: 1px solid #e0e0e0;
border-radius: 2px;
/*background: #f5f5f5;*/
}
/* override MathJax center alignment in output cells */
div.nboutput.container div[class*=MathJax] {
text-align: left !important;
}
/* override sphinx.ext.imgmath center alignment in output cells */
div.nboutput.container div.math p {
text-align: left;
}
/* standard error */
div.nboutput.container div.output_area.stderr {
background: #fdd;
}
/* ANSI colors */
.ansi-black-fg { color: #3E424D; }
.ansi-black-bg { background-color: #3E424D; }
.ansi-black-intense-fg { color: #282C36; }
.ansi-black-intense-bg { background-color: #282C36; }
.ansi-red-fg { color: #E75C58; }
.ansi-red-bg { background-color: #E75C58; }
.ansi-red-intense-fg { color: #B22B31; }
.ansi-red-intense-bg { background-color: #B22B31; }
.ansi-green-fg { color: #00A250; }
.ansi-green-bg { background-color: #00A250; }
.ansi-green-intense-fg { color: #007427; }
.ansi-green-intense-bg { background-color: #007427; }
.ansi-yellow-fg { color: #DDB62B; }
.ansi-yellow-bg { background-color: #DDB62B; }
.ansi-yellow-intense-fg { color: #B27D12; }
.ansi-yellow-intense-bg { background-color: #B27D12; }
.ansi-blue-fg { color: #208FFB; }
.ansi-blue-bg { background-color: #208FFB; }
.ansi-blue-intense-fg { color: #0065CA; }
.ansi-blue-intense-bg { background-color: #0065CA; }
.ansi-magenta-fg { color: #D160C4; }
.ansi-magenta-bg { background-color: #D160C4; }
.ansi-magenta-intense-fg { color: #A03196; }
.ansi-magenta-intense-bg { background-color: #A03196; }
.ansi-cyan-fg { color: #60C6C8; }
.ansi-cyan-bg { background-color: #60C6C8; }
.ansi-cyan-intense-fg { color: #258F8F; }
.ansi-cyan-intense-bg { background-color: #258F8F; }
.ansi-white-fg { color: #C5C1B4; }
.ansi-white-bg { background-color: #C5C1B4; }
.ansi-white-intense-fg { color: #A1A6B2; }
.ansi-white-intense-bg { background-color: #A1A6B2; }
.ansi-default-inverse-fg { color: #FFFFFF; }
.ansi-default-inverse-bg { background-color: #000000; }
.ansi-bold { font-weight: bold; }
.ansi-underline { text-decoration: underline; }
div.nbinput.container div.input_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight].math,
div.nboutput.container div.output_area.rendered_html,
div.nboutput.container div.output_area > div.output_javascript,
div.nboutput.container div.output_area:not(.rendered_html) > img{
padding: 5px;
margin: 0;
}
/* fix copybtn overflow problem in chromium (needed for 'sphinx_copybutton') */
div.nbinput.container div.input_area > div[class^='highlight'],
div.nboutput.container div.output_area > div[class^='highlight']{
overflow-y: hidden;
}
/* hide copybtn icon on prompts (needed for 'sphinx_copybutton') */
.prompt .copybtn {
display: none;
}
/* Some additional styling taken form the Jupyter notebook CSS */
.jp-RenderedHTMLCommon table,
div.rendered_html table {
border: none;
border-collapse: collapse;
border-spacing: 0;
color: black;
font-size: 12px;
table-layout: fixed;
}
.jp-RenderedHTMLCommon thead,
div.rendered_html thead {
border-bottom: 1px solid black;
vertical-align: bottom;
}
.jp-RenderedHTMLCommon tr,
.jp-RenderedHTMLCommon th,
.jp-RenderedHTMLCommon td,
div.rendered_html tr,
div.rendered_html th,
div.rendered_html td {
text-align: right;
vertical-align: middle;
padding: 0.5em 0.5em;
line-height: normal;
white-space: normal;
max-width: none;
border: none;
}
.jp-RenderedHTMLCommon th,
div.rendered_html th {
font-weight: bold;
}
.jp-RenderedHTMLCommon tbody tr:nth-child(odd),
div.rendered_html tbody tr:nth-child(odd) {
background: #f5f5f5;
}
.jp-RenderedHTMLCommon tbody tr:hover,
div.rendered_html tbody tr:hover {
background: rgba(66, 165, 245, 0.2);
}
</style>
<div class="section" id="Running-Neuron-Apache-MXNet-(Incubating)-ResNet50-on-Inferentia">
<h1>Running Neuron Apache MXNet (Incubating) ResNet50 on Inferentia<a class="headerlink" href="#Running-Neuron-Apache-MXNet-(Incubating)-ResNet50-on-Inferentia" title="Permalink to this headline">#</a></h1>
<div class="section" id="Introduction:">
<h2>Introduction:<a class="headerlink" href="#Introduction:" title="Permalink to this headline">#</a></h2>
<p>In this tutorial we will compile and deploy ResNet50 model for Inferentia. In this tutorial we provide two main sections:</p>
<p>1.Compile the ResNet50 model.</p>
<p>2.Infer the compiled model.</p>
<p>Before running the following verify this Jupyter notebook is running “conda_aws_neuron_mxnet_p36” kernel. You can select the Kernel from the “Kernel -> Change Kernel” option on the top of this Jupyter notebook page. Neuron supports Python module, Symbol APIs and the C predict API. The following quick start example uses the Symbol API.</p>
<div class="section" id="Warning">
<h3>Warning<a class="headerlink" href="#Warning" title="Permalink to this headline">#</a></h3>
<p>This tutorial was tested on MXNet-1.5</p>
<p>MXNet-1.5 entered maintenance mode and require Neuron runtime 1.0, please see : <a class="reference external" href="../../../../release-notes/maintenance.html">MXNet-1.5 enters maintainence mode</a></p>
<p>To setup development environment for MXNet-1.5 see installation instructions for Neuron 1.15.1 : <a class="reference external" href="../../../../frameworks/mxnet-neuron/setup/mxnet-install.html">Neuron-1.15.1 MXNet install</a></p>
</div>
</div>
<div class="section" id="Compile-model-on-Neuron">
<h2>Compile model on Neuron<a class="headerlink" href="#Compile-model-on-Neuron" title="Permalink to this headline">#</a></h2>
<p>The following step will compile the resnet50 model. Compilation will take a few minutes on inf1.6xlarge. At the end of compilation, the files resnet-50_compiled-0000.params and resnet-50_compiled-symbol.json will be created in local directory.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">mxnet</span> <span class="k">as</span> <span class="nn">mx</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="n">path</span><span class="o">=</span><span class="s1">'http://data.mxnet.io/models/imagenet/'</span>
<span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="n">path</span><span class="o">+</span><span class="s1">'resnet/50-layers/resnet-50-0000.params'</span><span class="p">)</span>
<span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="n">path</span><span class="o">+</span><span class="s1">'resnet/50-layers/resnet-50-symbol.json'</span><span class="p">)</span>
<span class="n">sym</span><span class="p">,</span> <span class="n">args</span><span class="p">,</span> <span class="n">aux</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">load_checkpoint</span><span class="p">(</span><span class="s1">'resnet-50'</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="c1"># Compile for Inferentia using Neuron</span>
<span class="n">inputs</span> <span class="o">=</span> <span class="p">{</span> <span class="s2">"data"</span> <span class="p">:</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">,</span><span class="mi">224</span><span class="p">,</span><span class="mi">224</span><span class="p">],</span> <span class="n">name</span><span class="o">=</span><span class="s1">'data'</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">'float32'</span><span class="p">)</span> <span class="p">}</span>
<span class="n">sym</span><span class="p">,</span> <span class="n">args</span><span class="p">,</span> <span class="n">aux</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">contrib</span><span class="o">.</span><span class="n">neuron</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span><span class="n">sym</span><span class="p">,</span> <span class="n">args</span><span class="p">,</span> <span class="n">aux</span><span class="p">,</span> <span class="n">inputs</span><span class="p">)</span>
<span class="c1">#save compiled model</span>
<span class="n">mx</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">save_checkpoint</span><span class="p">(</span><span class="s2">"resnet-50_compiled"</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="n">sym</span><span class="p">,</span> <span class="n">args</span><span class="p">,</span> <span class="n">aux</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="o">!</span>ls
</pre></div>
</div>
</div>
</div>
<div class="section" id="Deploy-on-Inferentia">
<h2>Deploy on Inferentia<a class="headerlink" href="#Deploy-on-Inferentia" title="Permalink to this headline">#</a></h2>
<p>Using same instance to deploy the model.</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">mxnet</span> <span class="k">as</span> <span class="nn">mx</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="n">path</span><span class="o">=</span><span class="s1">'http://data.mxnet.io/models/imagenet/'</span>
<span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="n">path</span><span class="o">+</span><span class="s1">'synset.txt'</span><span class="p">)</span>
<span class="n">fname</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="s1">'https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg?raw=true'</span><span class="p">)</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">image</span><span class="o">.</span><span class="n">imread</span><span class="p">(</span><span class="n">fname</span><span class="p">)</span><span class="c1"># convert into format (batch, RGB, width, height)</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">image</span><span class="o">.</span><span class="n">imresize</span><span class="p">(</span><span class="n">img</span><span class="p">,</span> <span class="mi">224</span><span class="p">,</span> <span class="mi">224</span><span class="p">)</span> <span class="c1"># resize</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">img</span><span class="o">.</span><span class="n">transpose</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">))</span> <span class="c1"># Channel first</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">img</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span> <span class="c1"># batchify</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">img</span><span class="o">.</span><span class="n">astype</span><span class="p">(</span><span class="n">dtype</span><span class="o">=</span><span class="s1">'float32'</span><span class="p">)</span>
<span class="n">sym</span><span class="p">,</span> <span class="n">args</span><span class="p">,</span> <span class="n">aux</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">load_checkpoint</span><span class="p">(</span><span class="s1">'resnet-50_compiled'</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="n">softmax</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">random_normal</span><span class="p">(</span><span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,))</span>
<span class="n">args</span><span class="p">[</span><span class="s1">'softmax_label'</span><span class="p">]</span> <span class="o">=</span> <span class="n">softmax</span>
<span class="n">args</span><span class="p">[</span><span class="s1">'data'</span><span class="p">]</span> <span class="o">=</span> <span class="n">img</span>
<span class="c1"># Inferentia context</span>
<span class="n">ctx</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">neuron</span><span class="p">()</span>
<span class="n">exe</span> <span class="o">=</span> <span class="n">sym</span><span class="o">.</span><span class="n">bind</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">ctx</span><span class="p">,</span> <span class="n">args</span><span class="o">=</span><span class="n">args</span><span class="p">,</span> <span class="n">aux_states</span><span class="o">=</span><span class="n">aux</span><span class="p">,</span> <span class="n">grad_req</span><span class="o">=</span><span class="s1">'null'</span><span class="p">)</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s1">'synset.txt'</span><span class="p">,</span> <span class="s1">'r'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">labels</span> <span class="o">=</span> <span class="p">[</span><span class="n">l</span><span class="o">.</span><span class="n">rstrip</span><span class="p">()</span> <span class="k">for</span> <span class="n">l</span> <span class="ow">in</span> <span class="n">f</span><span class="p">]</span>
<span class="n">exe</span><span class="o">.</span><span class="n">forward</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">img</span><span class="p">)</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">exe</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span><span class="c1"># print the top-5</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">squeeze</span><span class="p">(</span><span class="n">prob</span><span class="p">)</span>
<span class="n">a</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">argsort</span><span class="p">(</span><span class="n">prob</span><span class="p">)[::</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="n">a</span><span class="p">[</span><span class="mi">0</span><span class="p">:</span><span class="mi">5</span><span class="p">]:</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'probability=</span><span class="si">%f</span><span class="s1">, class=</span><span class="si">%s</span><span class="s1">'</span> <span class="o">%</span><span class="p">(</span><span class="n">prob</span><span class="p">[</span><span class="n">i</span><span class="p">],</span> <span class="n">labels</span><span class="p">[</span><span class="n">i</span><span class="p">]))</span>
<span class="c1"># Sample output will look like below:</span>
<span class="c1">#probability=0.634792, class=n02123045 tabby, tabby cat</span>
<span class="c1">#probability=0.193601, class=n02123159 tiger cat</span>
<span class="c1">#probability=0.103627, class=n02124075 Egyptian cat</span>
<span class="c1">#probability=0.031604, class=n02127052 lynx, catamount</span>
<span class="c1">#probability=0.015892, class=n02129604 tiger, Panthera tigris</span>
</pre></div>
</div>
</div>
</div>
</div>
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html>
|
2023-09-29T20:54:55.001Z
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/mxnet-neuron/mxnet-neuron.rst.txt
|
```
.. _mxnet-neuron-rn:
Apache MXNet Neuron (Incubating) Release Notes
==============================================
.. contents:: Table of contents
:local:
:depth: 1
This document lists the release notes for MXNet-Neuron framework.
Apache MXNet Neuron release [1.8.0.2.4.10.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 7/19/2023
Summary
-------
Minor bug fixes and enhancements for MXNet 1.8 Neuron.
Apache MXNet Neuron release [1.8.0.2.4.9.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 6/14/2023
Summary
-------
Minor bug fixes and enhancements for MXNet 1.8 Neuron.
Apache MXNet Neuron release [1.8.0.2.4.1.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 5/1/2023
New in this release
-------------------
* Updated Neuron Runtime library to version 2.12
* Added missing LICENSE.txt
Known Issues and Limitations
----------------------------
* Bert-base in 16 NeuronCores pipeline mode has 50% lower performance when running 16 inferences in parallel with Runtime version 2.12.
[1.5.1.1.10.39.0]
^^^^^^^^^^^^^^^^^
Date: 5/1/2023
Summary
-------
Minor bug fixes and enhancements for MXNet 1.5 Neuron.
This is the last released version. Please use neuron-cc version 1.15.0 only for this mxnet-neuron version. Also, this version is limited to Python 3.9 or below.
.. code:: bash
python -m pip install mxnet_neuron==1.5.1.* neuron-cc==1.15.0
Apache MXNet Neuron release [1.8.0.2.2.127.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 3/28/2023
Summary
-------
Minor bug fixes and enhancements for MXNet 1.8 Neuron.
[1.5.1.1.10.37.0]
^^^^^^^^^^^^^^^^^
Date: 3/28/2023
Summary
-------
Minor bug fixes and enhancements for MXNet 1.5 Neuron.
Apache MXNet Neuron release [1.8.0.2.2.43.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 11/23/2022
Summary
-------
Minor bug fixes and enhancements for MXNet 1.8 Neuron.
[1.5.1.1.10.11.0]
^^^^^^^^^^^^^^^^^
Date: 11/23/2022
Summary
-------
Minor bug fixes and enhancements for MXNet 1.5 Neuron.
[1.5.1.1.10.0.0]
^^^^^^^^^^^^^^^^
Date: 04/28/2022
Summary
-------
Minor bug fixes and enhancements for MXNet 1.5 Neuron.
Apache MXNet Neuron release [1.8.0.2.2.2.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 03/25/2022
New in this release
-------------------
* Added support for unloading models from a NeuronDevice by deleting the model instance in the user application. Users can now call ``del`` in Python on an executor to unload the model from a NeuronDevice (provided the deleted executor is the last executor pointing to the given model). This requires the latest ``aws-mx-1.8`` package from ``https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl``.
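For illustration, a minimal sketch of the unload-by-deletion flow; the ``exe`` name is an assumption and stands for an executor bound to a compiled Neuron model:
.. code:: python
   # 'exe' is assumed to be the last executor pointing to the compiled model,
   # e.g. returned earlier by sym.bind(ctx=mx.neuron(), ...).
   # Deleting it unloads the model from the NeuronDevice.
   del exe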
Bug fixes
---------
* Fixed a memory leak caused by stale unloaded models in NeuronDevice memory. For this fix to take effect please install aws-mx package from https://aws-mx-pypi.s3.us-west-2.amazonaws.com/1.8.0/aws_mx-1.8.0.2-py2.py3-none-manylinux2014_x86_64.whl along with the latest mx-neuron package.
[1.5.1.1.9.0.0]
^^^^^^^^^^^^^^^
Date: 03/25/2022
Summary
-------
Minor bug fixes and enhancements for MXNet 1.5 Neuron.
Apache MXNet Neuron release [1.8.0.2.1.5.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 01/20/2022
New in this release
-------------------
* Added support for ``mx_neuron.__version__`` to get the build version of the MXNet Neuron plugin
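A minimal usage sketch:
.. code:: python
   import mx_neuron
   print(mx_neuron.__version__)  # prints the build version of the MXNet Neuron plugin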
Bug fixes
---------
* Fixed assertion errors when inference was completed with NaNs. The expected behavior is to complete inference successfully and warn the
user that ``NaN``s were seen during the current inference.
* Fixed a compile issue when individual output nodes have multiple outputs. Because the output index was being dropped, fewer
output feature maps were being considered, which caused failures during inference.
Apache MXNet Neuron release [1.8.0.2.0.276.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 11/05/2021
* Updated Neuron Runtime (which is integrated within this package) to ``libnrt 2.2.18.0`` to fix a container issue that was preventing
the use of containers when /dev/neuron0 was not present. See details here :ref:`neuron-runtime-release-notes`.
Apache MXNet Neuron release [1.8.0.2.0.271.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date 10/27/2021
New in this release
-------------------
- MXNet Neuron 1.8 now supports Neuron Runtime 2.x (``libnrt.so`` shared library) only.
.. important::
- You must update to the latest Neuron Driver (``aws-neuron-dkms`` version 2.1 or newer)
for proper functionality of the new runtime library.
- Read the :ref:`introduce-libnrt`
application note, which describes :ref:`why we are making this
change <introduce-libnrt-why>` and
how :ref:`this change will affect the Neuron
SDK <introduce-libnrt-how-sdk>` in detail.
- Read :ref:`neuron-migrating-apps-neuron-to-libnrt` for detailed information on how to
migrate your application.
- Introducing Flexible Execution Groups (FlexEG) feature. See :ref:`flexeg` application note.
Resolved Issues
---------------
- Fixed a bug that prevented compilation of gluon models with multiple
cpu and neuron nodes.
- Added more debug logic to help with profiling of model load timing.
[1.5.1.1.7.0.0]
^^^^^^^^^^^^^^^
Date 10/27/2021
New in this release
-------------------
- MXNet 1.5 enters maintenance mode. Please visit :ref:`maintenance_mxnet_1_5` for more
information.
Resolved Issues
---------------
- Minor bug fixes.
[1.5.1.1.6.5.0]
^^^^^^^^^^^^^^^
Date 08/12/2021
Summary
-------
Minor bug fixes and enhancements for MXNet 1.5 Neuron.
[1.8.0.1.3.4.0]
^^^^^^^^^^^^^^^
Date 08/12/2021
Summary
-------
Minor bug fixes and enhancements for MXNet 1.8 Neuron.
[1.5.1.1.6.1.0]
^^^^^^^^^^^^^^^
Date 07/02/2021
Summary
-------
Minor bug fixes and enhancements for MXNet 1.5 Neuron.
[1.8.0.1.3.0.0]
^^^^^^^^^^^^^^^
Date 07/02/2021
Summary
-------
Support for Autoloop, CPredict API, and minor bug fixes and enhancements for MXNet 1.8 Neuron.
Major New Features
------------------
- Added support for Autoloop feature for MXNet 1.8 Neuron.
Resolved Issues
---------------
- Added support for CPredict API.
[1.8.0.1.2.1.0]
^^^^^^^^^^^^^^^
Date 5/28/2021
Summary
-------
Minor bug fixes and enhancements for MXNet 1.8 Neuron
Resolved Issues
---------------
- Added support for Neuron profiler
[1.8.0.1.1.2.0]
^^^^^^^^^^^^^^^
Date 4/30/2021
Summary
-------
Initial release of Apache MXNet (Incubating) 1.8 for Neuron
Major New Features
------------------
- Gluon API and Neuron support for NLP BERT models
- Neuron is now a plugin
- Please note new API changes to support plugin mode: :ref:`ref-mxnet-neuron-compilation-python-api`
[1.5.1.1.4.x.x]
^^^^^^^^^^^^^^^
Date 5/28/2021
Summary
-------
- Minor enhancements.
[1.5.1.1.4.4.0]
^^^^^^^^^^^^^^^
Date 4/30/2021
Summary
-------
- Resolve an issue with Neuron profiling.
Resolved Issues
---------------
- Issue: when Neuron profiling is enabled in MXNet-Neuron 1.5.1 (using NEURON_PROFILE=<dir>), and TensorBoard is used to read in the profiled data, the user would see an error message "panic: runtime error: index out of range". This issue is resolved in this release.
[1.5.1.1.3.8.0]
^^^^^^^^^^^^^^^
Date 3/4/2021
Summary
-------
Minor enhancements.
[1.5.1.1.3.7.0]
^^^^^^^^^^^^^^^
Date 2/24/2021
Summary
-------
Fix for CVE-2021-3177.
[1.5.1.1.3.2.0]
^^^^^^^^^^^^^^^
Date 1/30/2021
Summary
-------
Various minor improvements
[1.5.1.1.2.1.0]
^^^^^^^^^^^^^^^
Date 12/23/2020
Summary
-------
Various minor improvements
[1.5.1.1.1.88.0]
^^^^^^^^^^^^^^^^
Date 11/17/2020
Summary
-------
This release includes the bug fix for MXNet Model Server not being able to clean up
Neuron RTD states after model is unloaded (deleted) from model server.
Resolved Issues
---------------
- Issue: MXNet Model Server is not able to clean up Neuron RTD states
after model is unloaded (deleted) from model server.
- Workaround for earlier versions: run “\ ``/opt/aws/neuron/bin/neuron-cli reset``\ “ to
clear Neuron RTD states after all models are unloaded and server is
shut down.
[1.5.1.1.1.52.0]
^^^^^^^^^^^^^^^^
Date 09/22/2020
Summary
-------
Various minor improvements.
Major New Features
------------------
Resolved Issues
---------------
- Issue: When first importing MXNet into python process and subprocess
call is invoked, user may get an OSError exception "OSError: [Errno
14] Bad address" during subprocess call (see
https://github.com/apache/incubator-mxnet/issues/13875 for more
details). This issue is fixed with a mitigation patch from MXNet for
Open-MP fork race conditions.
- Workaround for earlier versions: Export KMP_INIT_AT_FORK=false
before running python process.
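A sketch of that workaround for the earlier versions (the script name is a placeholder):
.. code:: bash
   export KMP_INIT_AT_FORK=false
   python your_inference_script.py  # placeholder for the Python process you run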
.. _1511110:
[1.5.1.1.1.1.0]
^^^^^^^^^^^^^^^
Date 08/08/2020
.. _summary-1:
Summary
-------
Various minor improvements.
.. _major-new-features-1:
Major New Features
------------------
.. _resolved-issues-1:
Resolved Issues
---------------
.. _1511021010:
[1.5.1.1.0.2101.0]
^^^^^^^^^^^^^^^^^^
Date 08/05/2020
.. _summary-2:
Summary
-------
Various minor improvements.
.. _major-new-features-2:
Major New Features
------------------
.. _resolved-issues-2:
Resolved Issues
---------------
.. _1511020930:
[1.5.1.1.0.2093.0]
^^^^^^^^^^^^^^^^^^
Date 07/16/2020
.. _summary-3:
Summary
-------
This release contains a few bug fixes and user experience improvements.
.. _major-new-features-3:
Major New Features
------------------
.. _resolved-issues-3:
Resolved Issues
---------------
- User can specify NEURONCORE_GROUP_SIZES without brackets (for
example, "1,1,1,1"), as can be done in TensorFlow-Neuron and
PyTorch-Neuron (see the example after this list).
- Fixed a memory leak when inferring neuron subgraph properties
- Fixed a bug dealing with multi-input subgraphs
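For the NEURONCORE_GROUP_SIZES item above, a minimal sketch; the group sizes shown are illustrative only:
.. code:: bash
   # Both the bracket-free and the original bracketed forms are accepted.
   export NEURONCORE_GROUP_SIZES="1,1,1,1"
   export NEURONCORE_GROUP_SIZES="[1,1,1,1]"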
.. _1511020330:
[1.5.1.1.0.2033.0]
^^^^^^^^^^^^^^^^^^
Date 6/11/2020
.. _summary-4:
Summary
-------
- Added support for profiling during inference
.. _major-new-features-4:
Major New Features
------------------
- Profiling can now be enabled by specifying the profiling work
directory using NEURON_PROFILE environment variable during inference.
For an example of using profiling, see :ref:`tensorboard-neuron`.
(Note that graph view of MXNet graph is not available via
TensorBoard).
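A minimal sketch of enabling profiling; the directory path and script name are placeholders:
.. code:: bash
   mkdir -p ./neuron_profile
   export NEURON_PROFILE=./neuron_profile   # profiling work directory read during inference
   python your_inference_script.py          # placeholder for your inference process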
.. _resolved-issues-4:
Resolved Issues
---------------
Known Issues and Limitations
----------------------------
Other Notes
-----------
.. _1511019000:
[1.5.1.1.0.1900.0]
^^^^^^^^^^^^^^^^^^
Date 5/11/2020
.. _summary-5:
Summary
-------
Improved support for shared-memory communication with Neuron-Runtime.
.. _major-new-features-5:
Major New Features
------------------
- Added support for the BERT-Base model (base: L-12 H-768 A-12), max
sequence length 64 and batch size of 8.
- Improved security for usage of shared-memory for data transfer
between framework and Neuron-Runtime
- Improved allocation and cleanup of shared-memory resource
- Improved container support by automatic falling back to GRPC data
transfer if shared-memory cannot be allocated by Neuron-Runtime
.. _resolved-issues-5:
Resolved Issues
---------------
- User is unable to allocate Neuron-Runtime shared-memory resource when
using MXNet-Neuron in a container to communicate with Neuron-Runtime
in another container. This is resolved by automatic falling back to
GRPC data transfer if shared-memory cannot be allocated by
Neuron-Runtime.
- Fixed issue where some large models could not be loaded on
inferentia.
.. _known-issues-and-limitations-1:
Known Issues and Limitations
----------------------------
.. _other-notes-1:
Other Notes
-----------
.. _1511015960:
[1.5.1.1.0.1596.0]
^^^^^^^^^^^^^^^^^^
Date 3/26/2020
.. _summary-6:
Summary
-------
No major changes or fixes
.. _major-new-features-6:
Major New Features
------------------
.. _resolved-issues-6:
Resolved Issues
---------------
.. _known-issues-and-limitations-2:
Known Issues and Limitations
----------------------------
.. _other-notes-2:
Other Notes
-----------
.. _1511014980:
[1.5.1.1.0.1498.0]
^^^^^^^^^^^^^^^^^^
Date 2/27/2020
.. _summary-7:
Summary
-------
No major changes or fixes.
.. _major-new-features-7:
Major New Features
------------------
.. _resolved-issues-7:
Resolved Issues
---------------
The issue(s) below are resolved:
- Latest pip version 20.0.1 breaks installation of MXNet-Neuron pip
wheel which has py2.py3 in the wheel name.
.. _known-issues-and-limitations-3:
Known Issues and Limitations
----------------------------
- User is unable to allocate Neuron-Runtime shared-memory resource when
using MXNet-Neuron in a container to communicate with Neuron-Runtime
in another container. To work-around, please set environment variable
NEURON_RTD_USE_SHM to 0.
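A sketch of that work-around:
.. code:: bash
   export NEURON_RTD_USE_SHM=0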
.. _other-notes-3:
Other Notes
-----------
.. _1511014010:
[1.5.1.1.0.1401.0]
^^^^^^^^^^^^^^^^^^
Date 1/27/2020
.. _summary-8:
Summary
-------
No major changes or fixes.
.. _major-new-features-8:
Major New Features
------------------
.. _resolved-issues-8:
Resolved Issues
---------------
- The following issue is resolved when the latest multi-model-server
with version >= 1.1.0 is used with MXNet-Neuron. You would still need
to use "``/opt/aws/neuron/bin/neuron-cli reset``" to clear all Neuron
RTD states after multi-model-server is exited:
- Issue: MXNet Model Server is not able to clean up Neuron RTD
states after model is unloaded (deleted) from model server and
previous workaround "``/opt/aws/neuron/bin/neuron-cli reset``" is
unable to clear all Neuron RTD states.
.. _known-issues-and-limitations-4:
Known Issues and Limitations
----------------------------
- Latest pip version 20.0.1 breaks installation of MXNet-Neuron pip
wheel which has py2.py3 in the wheel name. This breaks all existing
released versions. The error looks like:
::
Looking in indexes: https://pypi.org/simple, https://pip.repos.neuron.amazonaws.com
ERROR: Could not find a version that satisfies the requirement mxnet-neuron (from versions: none)
ERROR: No matching distribution found for mxnet-neuron
- Work around: install the older version of pip using "pip install
pip==19.3.1".
.. _other-notes-4:
Other Notes
-----------
.. _1511013250:
[1.5.1.1.0.1325.0]
^^^^^^^^^^^^^^^^^^
Date 12/1/2019
.. _summary-9:
Summary
-------
.. _major-new-features-9:
Major New Features
------------------
.. _resolved-issues-9:
Resolved Issues
---------------
- Issue: Compiler flags could not be passed to the compiler during the compile
call. The fix: compiler flags can now be passed during the compile call using the
“flags” option followed by a list of flags (see the sketch after this list).
- Issue: Advanced CPU fallback option is a way to attempt to improve
the number of operators on Inferentia. The default is currently set
to on, which may cause failures. The fix: This option is now off by
default.
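For the “flags” item above, a minimal sketch; keyword-style usage with ``mx.contrib.neuron.compile`` is assumed, and the specific flag is only illustrative (it is the one used in the data-parallel tutorial elsewhere in this document):
.. code:: python
   # Pass compiler flags as a list of strings via the 'flags' option.
   compile_args = {'flags': ['--fp32-cast matmult']}
   sym, args, aux = mx.contrib.neuron.compile(sym, args, aux, inputs, **compile_args)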
.. _known-issues-and-limitations-5:
Known Issues and Limitations
----------------------------
- Issue: MXNet Model Server is not able to clean up Neuron RTD states
after model is unloaded (deleted) from model server and previous
workaround "``/opt/aws/neuron/bin/neuron-cli reset``" is unable to
clear all Neuron RTD states.
- Workaround: run “\ ``sudo systemctl restart neuron-rtd``\ “ to
clear Neuron RTD states after all models are unloaded and server
is shut down.
.. _other-notes-5:
Other Notes
-----------
.. _1511013490:
[1.5.1.1.0.1349.0]
^^^^^^^^^^^^^^^^^^
Date 12/20/2019
.. _summary-10:
Summary
-------
No major changes or fixes. Released with other Neuron packages.
.. _1511012600:
[1.5.1.1.0.1260.0]
^^^^^^^^^^^^^^^^^^
Date: 11/25/2019
.. _summary-12:
Summary
-------
This version is available only in the released DLAMI v26.0 and is based on
MXNet version 1.5.1. Please see :ref:`dlami-rn-known-issues` and update to the latest version.
.. _major-new-features-11:
Major new features
------------------
.. _resolved-issues-11:
Resolved issues
---------------
.. _known-issues-and-limitations-7:
Known issues and limitations
----------------------------
- Issue: Compiler flags cannot be passed to compiler during compile
call.
- Issue: Advanced CPU fallback option is a way to attempt to improve
the number of operators on Inferentia. The default is currently set
to on, which may cause failures.
- Workaround: explicitly turn it off by setting compile option
op_by_op_compiler_retry to 0.
- Issue: Temporary files are put in current directory when debug is
enabled.
- Workaround: create a separate work directory and run the process
from within the work directory
- Issue: MXNet Model Server is not able to clean up Neuron RTD states
after model is unloaded (deleted) from model server.
- Workaround: run “\ ``/opt/aws/neuron/bin/neuron-cli reset``\ “ to
clear Neuron RTD states after all models are unloaded and server
is shut down.
- Issue: MXNet 1.5.1 may return inconsistent node names for some
operators when they are the primary outputs of a Neuron subgraph.
This causes failures during inference.
- Workaround : Use the ``excl_node_names`` compilation option to
change the partitioning of the graph during compile so that these
nodes are not the primary output of a neuron subgraph. See
:ref:`ref-mxnet-neuron-compilation-python-api`
.. code:: python
compile_args = { 'excl_node_names': ["node_name_to_exclude"] }
Models Supported
----------------
The following models have successfully run on neuron-inferentia systems:
1. Resnet50 V1/V2
2. Inception-V2/V3/V4
3. Parallel-WaveNet
4. Tacotron 2
5. WaveRNN
.. _other-notes-7:
Other Notes
-----------
- Python versions supported:
- 3.5, 3.6, 3.7
- Linux distribution supported:
- Ubuntu 18, Amazon Linux 2
```
|
|
2023-09-29T20:54:55.062Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/transformers-neuronx/setup/index.rst.txt
|
```
.. _transformers-neuronx-setup:
Transformers Neuron Setup (``transformers-neuronx``)
====================================================
To install the most rigorously tested stable release, use the PyPI pip wheel:
::
pip install transformers-neuronx --extra-index-url=https://pip.repos.neuron.amazonaws.com
```
|
|
2023-09-29T20:54:55.068Z
|
|
Using Data Parallel Mode with Gluon MXNet — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/src/examples/mxnet/data_parallel/data_parallel_tutorial.html
|
# Using Data Parallel Mode with Gluon MXNet — AWS Neuron Documentation
## Using Data Parallel Mode with Gluon MXNet[#](#Using-Data-Parallel-Mode-with-Gluon-MXNet "Permalink to this headline")
In this tutorial, you will compile a Gluon BERT model and run it in data-parallel mode to fully utilize the NeuronCores. Here you will benchmark a multi-worker setup and compare it with a single worker.
This tutorial is intended only for MXNet-1.8.
In this tutorial, we will be using an inf1.2xlarge with the latest AWS Deep Learning AMI (DLAMI). The inf1.2xlarge instance has 1 AWS Inferentia Chip with 4 NeuronCores.
## Setting up your environment[#](#Setting-up-your-environment "Permalink to this headline")
To run this tutorial, please make sure you deactivate any existing MXNet conda environments you are already using. Install MXNet 1.8 by following the instructions at [MXNet Setup Guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-intro/mxnet-setup/mxnet-install.html#develop-on-aws-ml-accelerator-instance). You would also need to change your kernel to use the correct Python environment set up earlier by clicking Kernel -> Change Kernel -> Python (Neuron MXNet).
## Install dependencies[#](#Install-dependencies "Permalink to this headline")
We have to install gluon-nlp to get the BERT model. Run the following command to install:
```
!python -m pip install gluonnlp
```
## Compiling BERT Model[#](#Compiling-BERT-Model "Permalink to this headline")
Next, we compile the Gluon BERT model and save it. Once the model is compiled, we use the same model across the entire tutorial. In this tutorial, we will be using a BERT model with sequence length 32.
```
import os
import mxnet as mx
import mx_neuron
import gluonnlp as nlp
```
```
BERT_MODEL = 'bert_12_768_12'
BERT_DATA = 'book_corpus_wiki_en_uncased'
batch_size = 1
seq_len = 32
num_cores = 1
dtype = 'float32'
compiled_model_path = '{}.compiled.{}.{}'.format(BERT_MODEL, batch_size, seq_len)
model, vocab = nlp.model.get_model(BERT_MODEL,
dataset_name=BERT_DATA,
use_classifier=False,
use_decoder=False, ctx=mx.cpu())
# Create sample inputs for compilation
words = mx.nd.ones([batch_size, seq_len], name='words', dtype=dtype)
valid_len = mx.nd.ones([batch_size,], name='valid_len', dtype=dtype)
segments = mx.nd.ones([batch_size, seq_len], name='segments', dtype=dtype)
inputs = {'data0': words, 'data1': segments, 'data2': valid_len}
# Compiler Args
options = {}
embeddingNames = ['bertmodel0_word_embed_embedding0_fwd', 'bertmodel0_token_type_embed_embedding0_fwd', 'bertencoder0_embedding0']
options.update({'force_incl_node_names': embeddingNames})
options.update({'flags': ['--fp32-cast matmult']})
# Compile and save
model = mx_neuron.compile(model, inputs=inputs, **options)
model.export(compiled_model_path)
```
## Data Parallel Mode[#](#Data-Parallel-Mode "Permalink to this headline")
Data Parallel Mode is a setup in which you launch multiple copies of the same model, such that each model runs independently of the others. In other words, each model has its own resources to run inference.
On an inf1.2xlarge instance, we have 4 NeuronCores. Hence, we can launch 4 models such that each model is loaded on a single NeuronCore. This enables us to process 4 requests concurrently without a linear increase in latency. As a result, the throughput of the system increases compared to single-model inference. This also allows us to utilize all 4 NeuronCores on the instance.
Run through the next set of cells to see the difference in throughput as we scale from one model to 4 models running in parallel.
```
import numpy as np
def get_sample_inputs(batch_size, seq_len):
words = np.ones([batch_size, seq_len], dtype=np.float32)
valid_len = np.ones([batch_size,], dtype=np.float32)
segments = np.ones([batch_size, seq_len], dtype=np.float32)
inputs = {'data0': words, 'data1': segments, 'data2': valid_len}
return inputs
```
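The helper above produces dummy all-ones inputs, which is sufficient for benchmarking. For reference, the sketch below shows how real text could be turned into the same `data0`/`data1`/`data2` arrays using the `vocab` object returned by `nlp.model.get_model` earlier. It relies on gluonnlp's `BERTTokenizer` and `BERTSentenceTransform` and is an illustrative assumption, not part of the original benchmark.
```
import numpy as np
import gluonnlp as nlp

# Tokenize a single sentence and pad/truncate it to the compiled sequence length.
tokenizer = nlp.data.BERTTokenizer(vocab, lower=True)
transform = nlp.data.BERTSentenceTransform(tokenizer, max_seq_length=seq_len, pair=False)
token_ids, valid_length, segment_ids = transform(('Running BERT on AWS Inferentia.',))

# Match the float32 dtype and [batch, seq_len] / [batch] shapes used at compile time.
real_inputs = {
    'data0': np.asarray([token_ids], dtype=np.float32),                      # words
    'data1': np.asarray([segment_ids], dtype=np.float32),                    # segments
    'data2': np.asarray(valid_length, dtype=np.float32).reshape(1,),         # valid_len
}
```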
Next, for comparison purposes, we run the setup with 1 worker. To do this, we set num\_cores=1, which launches only one model running on a single NeuronCore. After running the cell below, note down the latency and throughput of the system.
```
from parallel import NeuronSimpleDataParallel
from benchmark_utils import Results
import time
import functools
import os
import numpy as np
import warnings
num_cores = 1
batch_size=1
# Each worker process should use a single NeuronCore, hence we set
os.environ["NEURON_RT_NUM_CORES"] = "1"
# Result aggregation class (code in benchmark_utils.py)
results = Results(batch_size, num_cores)
def result_handler(output, start, end):
elapsed = end - start
results.add_result([elapsed], [end], [start])
inputs = get_sample_inputs(batch_size, seq_len)
parallel_neuron_model = NeuronSimpleDataParallel(compiled_model_path, num_cores, inputs)
# Start the inference workers
parallel_neuron_model.start_continuous_inference()
# Warm up the cores
for _ in range(num_cores*4):
parallel_neuron_model.warmup(inputs)
# Run a high number of iterations to benchmark the model
for _ in range(1000):
parallel_neuron_model.infer(inputs)
# Passing the result_handler as a callback function
parallel_neuron_model.add_result(result_handler)
# Stop inference
parallel_neuron_model.stop()
# Since we are using multi-process execution with a shared queue, some inferences
# may still be in flight. Hence we need to wait until all the inputs are processed.
# add_all_results() collects the results of requests that are still in this state.
parallel_neuron_model.add_all_results(result_handler)
with open("benchmark.txt", "w") as f:
results.report(f, window_size=1)
with open("benchmark.txt", "r") as f:
for line in f:
print(line)
```
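Before scaling up, it helps to be explicit about what the report contains. The `Results` class in `benchmark_utils.py` aggregates the `(elapsed, end, start)` samples recorded by `result_handler`; the sketch below is a simplified, hypothetical illustration of how latency percentiles and throughput can be derived from such timestamps, not the actual implementation.
```
import numpy as np

def summarize(starts, ends, batch_size=1):
    starts, ends = np.asarray(starts), np.asarray(ends)
    latencies_ms = (ends - starts) * 1000.0    # per-request latency
    wall_time = ends.max() - starts.min()      # total benchmarking window
    return {
        'p50_latency_ms': float(np.percentile(latencies_ms, 50)),
        'p99_latency_ms': float(np.percentile(latencies_ms, 99)),
        'throughput_inf_per_sec': len(starts) * batch_size / wall_time,
    }

# Example with dummy timestamps (seconds):
print(summarize(starts=[0.000, 0.010, 0.020], ends=[0.008, 0.018, 0.028]))
```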
Now we run the setup with 4 workers. To do this, we set num\_cores=4, which launches 4 models, each running on its own NeuronCore. All 4 models run in separate processes; in other words, the models run in parallel.
To feed the models efficiently, we use a producer-consumer setup in which each process running a model acts as a consumer. All consumers are fed from a shared input queue; a minimal sketch of this pattern is shown below.
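The sketch below illustrates the producer-consumer pattern with plain Python multiprocessing: each worker process pins itself to a single NeuronCore via `NEURON_RT_NUM_CORES` and consumes requests from a shared queue. It is a hypothetical, simplified stand-in for what `NeuronSimpleDataParallel` in `parallel.py` does (a dummy function replaces the compiled model), not the actual implementation.
```
import multiprocessing as mp
import os

def worker(worker_id, task_queue, result_queue):
    # Pin this worker to one NeuronCore before any model is loaded.
    os.environ["NEURON_RT_NUM_CORES"] = "1"
    model = lambda x: x * 2  # stand-in for the compiled BERT model
    while True:
        item = task_queue.get()
        if item is None:          # sentinel -> shut down this worker
            break
        result_queue.put((worker_id, model(item)))

if __name__ == '__main__':
    num_workers, num_requests = 4, 8
    tasks, results = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=worker, args=(i, tasks, results)) for i in range(num_workers)]
    for w in workers:
        w.start()
    for i in range(num_requests):   # producer: enqueue requests into the shared queue
        tasks.put(i)
    for _ in range(num_workers):    # one sentinel per worker
        tasks.put(None)
    for _ in range(num_requests):
        print(results.get())
    for w in workers:
        w.join()
```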
Now run the benchmark below. You may notice that the throughput increases by more than 2x compared to the single-worker setup.
```
from parallel import NeuronSimpleDataParallel
from benchmark_utils import Results
import time
import functools
import os
import numpy as np
num_cores = 4
batch_size=1
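# Each worker process should still use a single NeuronCore, hence this stays at "1"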
os.environ["NEURON_RT_NUM_CORES"] = "1"
# Result aggregation class (code in benchmark_utils.py)
results = Results(batch_size, num_cores)
def result_handler(output, start, end):
elapsed = end - start
results.add_result([elapsed], [end], [start])
inputs = get_sample_inputs(batch_size, seq_len)
parallel_neuron_model = NeuronSimpleDataParallel(compiled_model_path, num_cores, inputs)
# Start the inference workers
parallel_neuron_model.start_continuous_inference()
# Warm up the cores
for _ in range(num_cores*4):
parallel_neuron_model.warmup(inputs)
# Run a high number of iterations to benchmark the models
for _ in range(5000):
parallel_neuron_model.infer(inputs)
# Passing the result_handler as a callback function
parallel_neuron_model.add_result(result_handler)
# Stop inference
parallel_neuron_model.stop()
# Since we are using multi-process execution with a shared queue, some inferences
# may still be in flight. Hence we need to wait until all the inputs are processed.
# add_all_results() collects the results of requests that are still in this state.
parallel_neuron_model.add_all_results(result_handler)
with open("benchmark.txt", "w") as f:
results.report(f, window_size=1)
with open("benchmark.txt", "r") as f:
for line in f:
print(line)
```
|
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fsrc/examples/mxnet/data_parallel/data_parallel_tutorial.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/src/examples/mxnet/data_parallel/data_parallel_tutorial.ipynb" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../../_sources/src/examples/mxnet/data_parallel/data_parallel_tutorial.ipynb.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.ipynb</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
# Using Data Parallel Mode with Gluon MXNet

## Contents

- [Setting up your environment](#Setting-up-your-environment)
- [Install dependencies](#Install-dependencies)
- [Compiling BERT Model](#Compiling-BERT-Model)
- [Data Parallel Mode](#Data-Parallel-Mode)
## Using Data Parallel Mode with Gluon MXNet[#](#Using-Data-Parallel-Mode-with-Gluon-MXNet "Permalink to this headline")

In this tutorial, you will compile a Gluon BERT model and run it in data-parallel mode to fully utilize the NeuronCores. You will benchmark a multi-worker setup and compare it with a single worker.

This tutorial is intended only for MXNet-1.8.

In this tutorial, we will be using an inf1.2xlarge instance with the latest AWS Deep Learning AMI (DLAMI). The inf1.2xlarge instance has 1 AWS Inferentia chip with 4 NeuronCores.
## Setting up your environment[#](#Setting-up-your-environment "Permalink to this headline")

To run this tutorial, please make sure you deactivate any existing MXNet conda environments you are already using. Install MXNet 1.8 by following the instructions in the [MXNet Setup Guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-intro/mxnet-setup/mxnet-install.html#develop-on-aws-ml-accelerator-instance). You will also need to change your kernel to use the Python environment set up earlier by clicking Kernel -> Change Kernel -> Python (Neuron MXNet).
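If you are installing the packages yourself rather than using a pre-built DLAMI conda environment, the cell below is a minimal sketch of one way to do it. The package names `mx_neuron` and `neuron-cc` and the pip repository URL are assumptions based on the Neuron setup documentation; verify them against the setup guide linked above for your release.

```
# Assumed package names and repository URL; check the MXNet Setup Guide before running.
!python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
!python -m pip install mx_neuron neuron-cc
```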
## Install dependencies[#](#Install-dependencies "Permalink to this headline")

We have to install gluon-nlp to get the BERT model. Run the following command to install it:

```
!python -m pip install gluonnlp
```
## Compiling BERT Model[#](#Compiling-BERT-Model "Permalink to this headline")

Next, we compile the Gluon BERT model and save it. Once the model is compiled, we use the same model across the entire tutorial. In this tutorial, we will be using a BERT model with sequence length 32.

```
import os
import mxnet as mx
import mx_neuron
import gluonnlp as nlp
```
```
BERT_MODEL = 'bert_12_768_12'
BERT_DATA = 'book_corpus_wiki_en_uncased'
batch_size = 1
seq_len = 32
num_cores = 1
dtype = 'float32'
compiled_model_path = '{}.compiled.{}.{}'.format(BERT_MODEL, batch_size, seq_len)
model, vocab = nlp.model.get_model(BERT_MODEL,
                                   dataset_name=BERT_DATA,
                                   use_classifier=False,
                                   use_decoder=False, ctx=mx.cpu())
# Create sample inputs for compilation
words = mx.nd.ones([batch_size, seq_len], name='words', dtype=dtype)
valid_len = mx.nd.ones([batch_size,], name='valid_len', dtype=dtype)
segments = mx.nd.ones([batch_size, seq_len], name='segments', dtype=dtype)
inputs = {'data0': words, 'data1': segments, 'data2': valid_len}
# Compiler args
options = {}
embeddingNames = ['bertmodel0_word_embed_embedding0_fwd', 'bertmodel0_token_type_embed_embedding0_fwd', 'bertencoder0_embedding0']
options.update({'force_incl_node_names': embeddingNames})
options.update({'flags': ['--fp32-cast matmult']})
# Compile and save
model = mx_neuron.compile(model, inputs=inputs, **options)
model.export(compiled_model_path)
```
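The exported files can be loaded back as an ordinary Gluon block. The snippet below is a minimal sketch of doing so with MXNet's `SymbolBlock.imports`, reusing `compiled_model_path` and the sample inputs from the cell above; the input names and file suffixes follow from the `export()` call, and running under the default CPU context is an assumption (the benchmark helper used later in this tutorial handles model loading itself).

```
import mxnet as mx

# Sketch: reload the exported, compiled model as a Gluon block and run one inference.
net = mx.gluon.nn.SymbolBlock.imports(
    compiled_model_path + '-symbol.json',    # symbol file written by export()
    ['data0', 'data1', 'data2'],             # input names expected by the compiled graph
    compiled_model_path + '-0000.params',    # parameter file written by export()
    ctx=mx.cpu())
out = net(words, segments, valid_len)        # single inference with the sample inputs
```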
## Data Parallel Mode[#](#Data-Parallel-Mode "Permalink to this headline")

Data Parallel Mode is a setup in which you launch multiple copies of the same model, such that each copy runs independently of the others. In other words, each model has its own resources to run inference.

On an inf1.2xlarge instance, we have 4 NeuronCores. Hence, we can launch 4 models such that each model is loaded onto a single NeuronCore. This enables us to process 4 requests concurrently without a linear increase in latency. As a result, the throughput of the system increases compared to single-model inference, and all 4 NeuronCores on the instance are utilized.

Run through the next set of cells to see the difference in throughput as we scale from one model to 4 models running in parallel.
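The cells below rely on a `NeuronSimpleDataParallel` helper from a local `parallel.py` module that is not reproduced in this tutorial. As a rough sketch of the pattern it implements (one model per worker process, each process pinned to a single NeuronCore through NEURON_RT_NUM_CORES, all workers fed from a shared queue), something along the following lines could be used; the function names and queue protocol here are hypothetical and only illustrate the idea.

```
import os
import multiprocessing as mp

def worker(model_path, input_queue, output_queue):
    """One worker process: one NeuronCore, one copy of the compiled model."""
    # Pin this process to a single NeuronCore before any Neuron model is loaded.
    os.environ["NEURON_RT_NUM_CORES"] = "1"
    import mxnet as mx  # import inside the worker so each process has its own engine
    net = mx.gluon.nn.SymbolBlock.imports(
        model_path + '-symbol.json', ['data0', 'data1', 'data2'],
        model_path + '-0000.params', ctx=mx.cpu())
    while True:
        item = input_queue.get()
        if item is None:  # sentinel value: shut this worker down
            break
        words, segments, valid_len = item
        out = net(mx.nd.array(words), mx.nd.array(segments), mx.nd.array(valid_len))
        output_queue.put(out[0].asnumpy() if isinstance(out, (list, tuple)) else out.asnumpy())

def launch_workers(model_path, num_workers):
    """Start num_workers consumer processes that share one input queue."""
    input_queue, output_queue = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(model_path, input_queue, output_queue))
             for _ in range(num_workers)]
    for p in procs:
        p.start()
    return procs, input_queue, output_queue
```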
```
import numpy as np

def get_sample_inputs(batch_size, seq_len):
    words = np.ones([batch_size, seq_len], dtype=np.float32)
    valid_len = np.ones([batch_size,], dtype=np.float32)
    segments = np.ones([batch_size, seq_len], dtype=np.float32)
    inputs = {'data0': words, 'data1': segments, 'data2': valid_len}
    return inputs
```
Next, for comparison purposes, we run the setup with a single worker. To do this, we set num_cores=1, which launches only one model running on a single NeuronCore. After running the cell below, note down the latency and throughput of the system.
```
from parallel import NeuronSimpleDataParallel
from benchmark_utils import Results
import time
import functools
import os
import numpy as np
import warnings

num_cores = 1
batch_size = 1

# Each worker process should use one core, hence we set
# os.environ['NEURON_RT_NUM_CORES'] = "1"
os.environ["NEURON_RT_NUM_CORES"] = "1"

# Result aggregation class (code in bert_benchmark_utils.py)
results = Results(batch_size, num_cores)

def result_handler(output, start, end):
    elapsed = end - start
    results.add_result([elapsed], [end], [start])

inputs = get_sample_inputs(batch_size, seq_len)
parallel_neuron_model = NeuronSimpleDataParallel(compiled_model_path, num_cores, inputs)

# Starting the inference threads
parallel_neuron_model.start_continuous_inference()

# Warm up the cores
for _ in range(num_cores*4):
    parallel_neuron_model.warmup(inputs)

# Need to run for high number of iterations to benchmark the models
for _ in range(1000):
    parallel_neuron_model.infer(inputs)
    # Passing the result_handler as a callback function
    parallel_neuron_model.add_result(result_handler)

# Stop inference
parallel_neuron_model.stop()

# Since we are using a multi-process execution with a shared queue, some inferences
# may still be in execution phase. Hence we need to wait till all the inputs are processed.
# add_all_results() will collect all the results of requests which are in this state.
parallel_neuron_model.add_all_results(result_handler)

with open("benchmark.txt", "w") as f:
    results.report(f, window_size=1)
with open("benchmark.txt", "r") as f:
    for line in f:
        print(line)
```
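The `Results` class from `benchmark_utils` is also not reproduced here. As a rough illustration of the numbers it reports, latency percentiles and throughput can be derived from the `(start, end)` timestamps collected by the callback along the following lines (the helper below is hypothetical and only shows the arithmetic):

```
import numpy as np

def summarize(latencies_s, starts_s, ends_s):
    """Toy summary: per-request latency percentiles and overall throughput."""
    latencies = np.array(latencies_s)
    p50 = np.percentile(latencies, 50) * 1000.0   # median latency in ms
    p99 = np.percentile(latencies, 99) * 1000.0   # tail latency in ms
    wall_time = max(ends_s) - min(starts_s)       # total benchmark window in seconds
    throughput = len(latencies) / wall_time       # completed inferences per second
    return p50, p99, throughput
```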
Now we run the setup with 4 workers. To do this, we set num_cores=4, which launches 4 models, each running on an individual NeuronCore. All 4 models run in separate processes; in other words, the models run in parallel.

To feed the models efficiently, we use a producer-consumer setup in which every process running a model acts as a consumer. All consumers are fed from a shared input queue.

Now run the setup below. You should notice that throughput increases by more than 2x compared to the single-worker setup.
```
from parallel import NeuronSimpleDataParallel
from benchmark_utils import Results
import time
import functools
import os
import numpy as np

num_cores = 4
batch_size = 1

os.environ["NEURON_RT_NUM_CORES"] = "1"

# Result aggregation class (code in bert_benchmark_utils.py)
results = Results(batch_size, num_cores)

def result_handler(output, start, end):
    elapsed = end - start
    results.add_result([elapsed], [end], [start])

inputs = get_sample_inputs(batch_size, seq_len)
parallel_neuron_model = NeuronSimpleDataParallel(compiled_model_path, num_cores, inputs)

# Starting the inference threads
parallel_neuron_model.start_continuous_inference()

# Warm up the cores
for _ in range(num_cores*4):
    parallel_neuron_model.warmup(inputs)

# Need to run for high number of iterations to benchmark the models
for _ in range(5000):
    parallel_neuron_model.infer(inputs)
    # Passing the result_handler as a callback function
    parallel_neuron_model.add_result(result_handler)

# Stop inference
parallel_neuron_model.stop()

# Since we are using a multi-process execution with a shared queue, some inferences
# may still be in execution phase. Hence we need to wait till all the inputs are processed.
# add_all_results() will collect all the results of requests which are in this state.
parallel_neuron_model.add_all_results(result_handler)

with open("benchmark.txt", "w") as f:
    results.report(f, window_size=1)
with open("benchmark.txt", "r") as f:
    for line in f:
        print(line)
```
# Neuron Apache MXNet (Incubating) - Configurations for NeuronCore Groups Using Resnet50 — AWS Neuron Documentation
## Contents
- [Introduction:](#Introduction:)
- [Compile model for Neuron](#Compile-model-for-Neuron)
- [Run inference using NeuronCore Groups](#Run-inference-using-NeuronCore-Groups)
- [Troubleshooting](#Troubleshooting)
## Neuron Apache MXNet (Incubating) - Configurations for NeuronCore Groups Using Resnet50[#](#Neuron-Apache-MXNet-(Incubating)---Configurations-for-NeuronCore-Groups-Using-Resnet50 "Permalink to this headline")
## Introduction:[#](#Introduction: "Permalink to this headline")
In this tutorial we will compile and deploy a Resnet-50 model in parallel using the concept of NeuronCore Groups on an Inf1 instance. This Jupyter notebook should be run on an inf1.6xlarge instance or larger. For simplicity we will run this tutorial on an inf1.6xlarge, but in a real-life scenario the compilation should be done on a compute instance and the deployment on an Inf1 instance to save costs.
Set the environment variable NEURON\_RT\_NUM\_CORES to the total number of NeuronCores that will be utilized. Neuron Runtime will create consecutive NeuronCore groups and place the models onto the cores according to their compiled sizes.
Note that in order to map a model to a group, the model must be compiled to fit within the group size. To limit the number of NeuronCores during compilation, use a compile\_args dictionary with the field "--neuroncore-pipeline-cores" set to the group size. For example, if NEURON\_RT\_NUM\_CORES=4 and two models compiled with "--neuroncore-pipeline-cores=3" and "--neuroncore-pipeline-cores=1" were loaded, the first model would occupy NC0-2 and the second model would occupy NC3.
```
compile_args = {'--neuroncore-pipeline-cores' : 2}
sym, args, auxs = neuron.compile(sym, args, auxs, inputs, **compile_args)
```
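As a concrete sketch of the NC0-2 / NC3 example above, using the same `neuron.compile` call as in this tutorial: the two group sizes, the start-core indices, and the assumption that the resnet-50 checkpoint files (downloaded in the compilation section below) are already present are all illustrative.

```
import os
import mxnet as mx
import mx_neuron as neuron

os.environ["NEURON_RT_NUM_CORES"] = "4"        # total cores the runtime may use

inputs = {"data": mx.nd.ones([1, 3, 224, 224], name='data', dtype='float32')}
sym, args, aux = mx.model.load_checkpoint('resnet-50', 0)

# Hypothetical: one copy compiled to fit a 3-core group, one to fit a 1-core group.
compile_args_3 = {'--neuroncore-pipeline-cores': 3}
compile_args_1 = {'--neuroncore-pipeline-cores': 1}
sym3, args3, aux3 = neuron.compile(sym, args, aux, inputs, **compile_args_3)
sym1, args1, aux1 = neuron.compile(sym, args, aux, inputs, **compile_args_1)

# When loaded, the 3-core model would be placed starting at NC0 (occupying NC0-2)
# and the 1-core model starting at NC3:
ctx_first = mx.neuron(0)    # NC0-2
ctx_second = mx.neuron(3)   # NC3
```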
In this tutorial we provide two main sections:
1. Compile the Resnet50 model for Neuron
2. Run inference using NeuronCore Groups
Please use environment `conda_aws_neuron_mxnet_p36`.
## Compile model for Neuron[#](#Compile-model-for-Neuron "Permalink to this headline")
The model must be compiled for the Inferentia target before it can be used on Inferentia. In the following, we compile the model with the flag --neuroncore-pipeline-cores set to 2 and run it. The files resnet-50\_compiled-0000.params and resnet-50\_compiled-symbol.json will be created in the local directory.
```
from packaging import version
import mxnet as mx
import numpy as np
import mx_neuron as neuron
path='http://data.mxnet.io/models/imagenet/'
mx.test_utils.download(path+'resnet/50-layers/resnet-50-0000.params')
mx.test_utils.download(path+'resnet/50-layers/resnet-50-symbol.json')
sym, args, aux = mx.model.load_checkpoint('resnet-50', 0)
# Compile for Inferentia using Neuron, fit to NeuronCore group size of 2
inputs = { "data" : mx.nd.ones([1,3,224,224], name='data', dtype='float32') }
compile_args = {'--neuroncore-pipeline-cores' : 2}
sym, args, aux = neuron.compile(sym, args, aux, inputs, **compile_args)
#save compiled model
mx.model.save_checkpoint("resnet-50_compiled", 0, sym, args, aux)
```
## Run inference using NeuronCore Groups[#](#Run-inference-using-NeuronCore-Groups "Permalink to this headline")
Within the framework, the model can be mapped to specific cores using the `ctx=mx.neuron(N)` context, where N specifies the index of the NeuronCore on which to deploy the model. For more information, see [https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/appnotes/perf/flex-eg.html](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/appnotes/perf/flex-eg.html).
```
import os
import warnings
mx.test_utils.download(path+'synset.txt')
fname = mx.test_utils.download('https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg?raw=true')
img = mx.image.imread(fname) # convert into format (batch, RGB, width, height)
img = mx.image.imresize(img, 224, 224) # resize
img = img.transpose((2, 0, 1)) # Channel first
img = img.expand_dims(axis=0) # batchify
img = img.astype(dtype='float32')
sym, args, aux = mx.model.load_checkpoint('resnet-50_compiled', 0)
softmax = mx.nd.random_normal(shape=(1,))
args['softmax_label'] = softmax
args['data'] = img
os.environ["NEURON_RT_NUM_CORES"] = '4'
# Inferentia context - group index 1 (size 2) would skip NC0 and place the
# compiled model onto NC1,2
ctx = mx.neuron(1)
exe = sym.bind(ctx=ctx, args=args, aux_states=aux, grad_req='null')
with open('synset.txt', 'r') as f:
labels = [l.rstrip() for l in f]
exe.forward(data=img)
prob = exe.outputs[0].asnumpy()
# print the top-5
prob = np.squeeze(prob)
a = np.argsort(prob)[::-1]
for i in a[0:5]:
print('probability=%f, class=%s' %(prob[i], labels[i]))
```
You can experiment with different Neuron core group combinations and different models.
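For instance, with NEURON\_RT\_NUM\_CORES=4 the 2-core compiled model from above could be loaded twice, one copy per 2-core group. The sketch below illustrates this; it reuses `img` and the compiled checkpoint from the cells above, the start-core indices follow the placement rule described earlier, and it assumes a fresh session in which no other model is currently loaded.

```
# Hypothetical: two copies of the 2-core compiled model, one per NeuronCore group.
sym_a, args_a, aux_a = mx.model.load_checkpoint('resnet-50_compiled', 0)
sym_b, args_b, aux_b = mx.model.load_checkpoint('resnet-50_compiled', 0)

for a in (args_a, args_b):
    a['softmax_label'] = mx.nd.random_normal(shape=(1,))
    a['data'] = img

exe_a = sym_a.bind(ctx=mx.neuron(0), args=args_a, aux_states=aux_a, grad_req='null')  # NC0-1
exe_b = sym_b.bind(ctx=mx.neuron(2), args=args_b, aux_states=aux_b, grad_req='null')  # NC2-3

for exe in (exe_a, exe_b):
    exe.forward(data=img)
    print(np.argmax(np.squeeze(exe.outputs[0].asnumpy())))   # top-1 class index per copy
```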
### Troubleshooting[#](#Troubleshooting "Permalink to this headline")
If not enough NeuronCores are provided, an error message will be displayed:
```
mxnet.base.MXNetError: [04:01:39] src/operator/subgraph/neuron/./neuron_util.h:541: Check failed: rsp.status().code() == 0: Failed load model with Neuron-RTD Error. Neuron-RTD Status Code: 9, details: ""
```
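If you see this error, make sure NEURON\_RT\_NUM\_CORES is set to a value large enough for all the groups you intend to create, and that it is set in the process environment before the first Neuron model is loaded, for example:

```
import os
# Must be set before the first model is loaded in this process
# (alternatively: export NEURON_RT_NUM_CORES=4 before starting the Jupyter kernel).
os.environ["NEURON_RT_NUM_CORES"] = "4"
```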
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fsrc/examples/mxnet/resnet50_neuroncore_groups.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/src/examples/mxnet/resnet50_neuroncore_groups.ipynb" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../_sources/src/examples/mxnet/resnet50_neuroncore_groups.ipynb.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.ipynb</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Introduction:">
Introduction:
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Compile-model-for-Neuron">
Compile model for Neuron
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Run-inference-using-NeuronCore-Groups">
Run inference using NeuronCore Groups
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Troubleshooting">
Troubleshooting
</a>
</li>
</ul>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>Neuron Apache MXNet (Incubating) - Configurations for NeuronCore Groups Using Resnet50</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
<div>
<h2> Contents </h2>
</div>
<nav aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Introduction:">
Introduction:
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Compile-model-for-Neuron">
Compile model for Neuron
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#Run-inference-using-NeuronCore-Groups">
Run inference using NeuronCore Groups
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#Troubleshooting">
Troubleshooting
</a>
</li>
</ul>
</li>
</ul>
</nav>
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
<style>
/* CSS for nbsphinx extension */
/* remove conflicting styling from Sphinx themes */
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt *,
div.nbinput.container div.input_area pre,
div.nboutput.container div.output_area pre,
div.nbinput.container div.input_area .highlight,
div.nboutput.container div.output_area .highlight {
border: none;
padding: 0;
margin: 0;
box-shadow: none;
}
div.nbinput.container > div[class*=highlight],
div.nboutput.container > div[class*=highlight] {
margin: 0;
}
div.nbinput.container div.prompt *,
div.nboutput.container div.prompt * {
background: none;
}
div.nboutput.container div.output_area .highlight,
div.nboutput.container div.output_area pre {
background: unset;
}
div.nboutput.container div.output_area div.highlight {
color: unset; /* override Pygments text color */
}
/* avoid gaps between output lines */
div.nboutput.container div[class*=highlight] pre {
line-height: normal;
}
/* input/output containers */
div.nbinput.container,
div.nboutput.container {
display: -webkit-flex;
display: flex;
align-items: flex-start;
margin: 0;
width: 100%;
}
@media (max-width: 540px) {
div.nbinput.container,
div.nboutput.container {
flex-direction: column;
}
}
/* input container */
div.nbinput.container {
padding-top: 5px;
}
/* last container */
div.nblast.container {
padding-bottom: 5px;
}
/* input prompt */
div.nbinput.container div.prompt pre {
color: #307FC1;
}
/* output prompt */
div.nboutput.container div.prompt pre {
color: #BF5B3D;
}
/* all prompts */
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: 4.5ex;
padding-top: 5px;
position: relative;
user-select: none;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: absolute;
right: 0;
margin-right: 0.3ex;
}
@media (max-width: 540px) {
div.nbinput.container div.prompt,
div.nboutput.container div.prompt {
width: unset;
text-align: left;
padding: 0.4em;
}
div.nboutput.container div.prompt.empty {
padding: 0;
}
div.nbinput.container div.prompt > div,
div.nboutput.container div.prompt > div {
position: unset;
}
}
/* disable scrollbars on prompts */
div.nbinput.container div.prompt pre,
div.nboutput.container div.prompt pre {
overflow: hidden;
}
/* input/output area */
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
-webkit-flex: 1;
flex: 1;
overflow: auto;
}
@media (max-width: 540px) {
div.nbinput.container div.input_area,
div.nboutput.container div.output_area {
width: 100%;
}
}
/* input area */
div.nbinput.container div.input_area {
border: 1px solid #e0e0e0;
border-radius: 2px;
/*background: #f5f5f5;*/
}
/* override MathJax center alignment in output cells */
div.nboutput.container div[class*=MathJax] {
text-align: left !important;
}
/* override sphinx.ext.imgmath center alignment in output cells */
div.nboutput.container div.math p {
text-align: left;
}
/* standard error */
div.nboutput.container div.output_area.stderr {
background: #fdd;
}
/* ANSI colors */
.ansi-black-fg { color: #3E424D; }
.ansi-black-bg { background-color: #3E424D; }
.ansi-black-intense-fg { color: #282C36; }
.ansi-black-intense-bg { background-color: #282C36; }
.ansi-red-fg { color: #E75C58; }
.ansi-red-bg { background-color: #E75C58; }
.ansi-red-intense-fg { color: #B22B31; }
.ansi-red-intense-bg { background-color: #B22B31; }
.ansi-green-fg { color: #00A250; }
.ansi-green-bg { background-color: #00A250; }
.ansi-green-intense-fg { color: #007427; }
.ansi-green-intense-bg { background-color: #007427; }
.ansi-yellow-fg { color: #DDB62B; }
.ansi-yellow-bg { background-color: #DDB62B; }
.ansi-yellow-intense-fg { color: #B27D12; }
.ansi-yellow-intense-bg { background-color: #B27D12; }
.ansi-blue-fg { color: #208FFB; }
.ansi-blue-bg { background-color: #208FFB; }
.ansi-blue-intense-fg { color: #0065CA; }
.ansi-blue-intense-bg { background-color: #0065CA; }
.ansi-magenta-fg { color: #D160C4; }
.ansi-magenta-bg { background-color: #D160C4; }
.ansi-magenta-intense-fg { color: #A03196; }
.ansi-magenta-intense-bg { background-color: #A03196; }
.ansi-cyan-fg { color: #60C6C8; }
.ansi-cyan-bg { background-color: #60C6C8; }
.ansi-cyan-intense-fg { color: #258F8F; }
.ansi-cyan-intense-bg { background-color: #258F8F; }
.ansi-white-fg { color: #C5C1B4; }
.ansi-white-bg { background-color: #C5C1B4; }
.ansi-white-intense-fg { color: #A1A6B2; }
.ansi-white-intense-bg { background-color: #A1A6B2; }
.ansi-default-inverse-fg { color: #FFFFFF; }
.ansi-default-inverse-bg { background-color: #000000; }
.ansi-bold { font-weight: bold; }
.ansi-underline { text-decoration: underline; }
div.nbinput.container div.input_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight] > pre,
div.nboutput.container div.output_area div[class*=highlight].math,
div.nboutput.container div.output_area.rendered_html,
div.nboutput.container div.output_area > div.output_javascript,
div.nboutput.container div.output_area:not(.rendered_html) > img{
padding: 5px;
margin: 0;
}
/* fix copybtn overflow problem in chromium (needed for 'sphinx_copybutton') */
div.nbinput.container div.input_area > div[class^='highlight'],
div.nboutput.container div.output_area > div[class^='highlight']{
overflow-y: hidden;
}
/* hide copybtn icon on prompts (needed for 'sphinx_copybutton') */
.prompt .copybtn {
display: none;
}
/* Some additional styling taken form the Jupyter notebook CSS */
.jp-RenderedHTMLCommon table,
div.rendered_html table {
border: none;
border-collapse: collapse;
border-spacing: 0;
color: black;
font-size: 12px;
table-layout: fixed;
}
.jp-RenderedHTMLCommon thead,
div.rendered_html thead {
border-bottom: 1px solid black;
vertical-align: bottom;
}
.jp-RenderedHTMLCommon tr,
.jp-RenderedHTMLCommon th,
.jp-RenderedHTMLCommon td,
div.rendered_html tr,
div.rendered_html th,
div.rendered_html td {
text-align: right;
vertical-align: middle;
padding: 0.5em 0.5em;
line-height: normal;
white-space: normal;
max-width: none;
border: none;
}
.jp-RenderedHTMLCommon th,
div.rendered_html th {
font-weight: bold;
}
.jp-RenderedHTMLCommon tbody tr:nth-child(odd),
div.rendered_html tbody tr:nth-child(odd) {
background: #f5f5f5;
}
.jp-RenderedHTMLCommon tbody tr:hover,
div.rendered_html tbody tr:hover {
background: rgba(66, 165, 245, 0.2);
}
</style>
<div class="section" id="Neuron-Apache-MXNet-(Incubating)---Configurations-for-NeuronCore-Groups-Using-Resnet50">
<h1>Neuron Apache MXNet (Incubating) - Configurations for NeuronCore Groups Using Resnet50<a class="headerlink" href="#Neuron-Apache-MXNet-(Incubating)---Configurations-for-NeuronCore-Groups-Using-Resnet50" title="Permalink to this headline">#</a></h1>
<div class="section" id="Introduction:">
<h2>Introduction:<a class="headerlink" href="#Introduction:" title="Permalink to this headline">#</a></h2>
<p>In this tutorial we will compile and deploy Resnet-50 model in parallel using the concept of NeuronCore Groups on an Inf1 instance. This Jupyter notebook should be run on an instance which is inf1.6xlarge or larger. For simplicity we will run this tutorial on inf1.6xlarge but in real life scenario the compilation should be done on a compute instance and the deployment on inf1 instance to save costs.</p>
<p>Set environment variable NEURON_RT_NUM_CORES to the total number of Neuron cores that will be utilized. The consecutive NeuronCore groups will be created by Neuron Runtime and place the models to the cores according to the compiled size.</p>
<p>Note that in order to map a model to a group, the model must be compiled to fit within the group size. To limit the number of NeuronCores during compilation, use compiler_args dictionary with field “–neuroncore-pipeline-cores“ set to the group size. For exmaple, if NEURON_RT_NUM_CORES=4 and two models compiled with “–neuroncore-pipeline-cores=3“ and “–neuroncore-pipeline-cores=1“ were loaded, the first model would occupy NC0-2 and the second model would occupy NC3.</p>
<div class="highlight-none notranslate"><div class="highlight"><pre><span></span>compile_args = {'--neuroncore-pipeline-cores' : 2}
sym, args, auxs = neuron.compile(sym, args, auxs, inputs, **compile_args)
</pre></div>
</div>
In this tutorial we provide two main sections:

1. Compile the Resnet50 model for Neuron

2. Run inference using NeuronCore Groups

Please use environment `conda_aws_neuron_mxnet_p36`.
<div class="section" id="Compile-model-for-Neuron">
<h2>Compile model for Neuron<a class="headerlink" href="#Compile-model-for-Neuron" title="Permalink to this headline">#</a></h2>
<p>Model must be compiled to Inferentia target before it can be used on Inferentia. In the following we will compile the the flag, –neuroncore-pipeline-cores set to 2 and run it. The files resnet-50_compiled-0000.params and resnet-50_compiled-symbol.json will be created in local directory</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span> <span class="nn">packaging</span> <span class="kn">import</span> <span class="n">version</span>
<span class="kn">import</span> <span class="nn">mxnet</span> <span class="k">as</span> <span class="nn">mx</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="nn">np</span>
<span class="kn">import</span> <span class="nn">mx_neuron</span> <span class="k">as</span> <span class="nn">neuron</span>
<span class="n">path</span><span class="o">=</span><span class="s1">'http://data.mxnet.io/models/imagenet/'</span>
<span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="n">path</span><span class="o">+</span><span class="s1">'resnet/50-layers/resnet-50-0000.params'</span><span class="p">)</span>
<span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="n">path</span><span class="o">+</span><span class="s1">'resnet/50-layers/resnet-50-symbol.json'</span><span class="p">)</span>
<span class="n">sym</span><span class="p">,</span> <span class="n">args</span><span class="p">,</span> <span class="n">aux</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">load_checkpoint</span><span class="p">(</span><span class="s1">'resnet-50'</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="c1"># Compile for Inferentia using Neuron, fit to NeuronCore group size of 2</span>
<span class="n">inputs</span> <span class="o">=</span> <span class="p">{</span> <span class="s2">"data"</span> <span class="p">:</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">ones</span><span class="p">([</span><span class="mi">1</span><span class="p">,</span><span class="mi">3</span><span class="p">,</span><span class="mi">224</span><span class="p">,</span><span class="mi">224</span><span class="p">],</span> <span class="n">name</span><span class="o">=</span><span class="s1">'data'</span><span class="p">,</span> <span class="n">dtype</span><span class="o">=</span><span class="s1">'float32'</span><span class="p">)</span> <span class="p">}</span>
<span class="n">compile_args</span> <span class="o">=</span> <span class="p">{</span><span class="s1">'--neuroncore-pipeline-cores'</span> <span class="p">:</span> <span class="mi">2</span><span class="p">}</span>
<span class="n">sym</span><span class="p">,</span> <span class="n">args</span><span class="p">,</span> <span class="n">aux</span> <span class="o">=</span> <span class="n">neuron</span><span class="o">.</span><span class="n">compile</span><span class="p">(</span><span class="n">sym</span><span class="p">,</span> <span class="n">args</span><span class="p">,</span> <span class="n">aux</span><span class="p">,</span> <span class="n">inputs</span><span class="p">,</span> <span class="o">**</span><span class="n">compile_args</span><span class="p">)</span>
<span class="c1">#save compiled model</span>
<span class="n">mx</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">save_checkpoint</span><span class="p">(</span><span class="s2">"resnet-50_compiled"</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="n">sym</span><span class="p">,</span> <span class="n">args</span><span class="p">,</span> <span class="n">aux</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
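To illustrate how several models can share the cores made visible by NEURON_RT_NUM_CORES, the sketch below (an addition for illustration, not part of the original tutorial) compiles a second copy of the network so that it fits a single NeuronCore. The checkpoint name resnet-50_compiled_1nc is a placeholder chosen here.

```
# Illustrative sketch: compile the same network again, this time sized to a
# single NeuronCore, so it can sit next to the two-core model compiled above.
# Reload the original checkpoint first, since sym/args/aux now hold the
# two-core compiled artifacts. 'resnet-50_compiled_1nc' is a placeholder name.
sym0, args0, aux0 = mx.model.load_checkpoint('resnet-50', 0)
compile_args_1nc = {'--neuroncore-pipeline-cores': 1}
sym1, args1, aux1 = neuron.compile(sym0, args0, aux0, inputs, **compile_args_1nc)
mx.model.save_checkpoint("resnet-50_compiled_1nc", 0, sym1, args1, aux1)
```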
<div class="section" id="Run-inference-using-NeuronCore-Groups">
<h2>Run inference using NeuronCore Groups<a class="headerlink" href="#Run-inference-using-NeuronCore-Groups" title="Permalink to this headline">#</a></h2>
<p>Within the framework, the model can be mapped to specific cores using <code class="docutils literal notranslate"><span class="pre">ctx=mx.neuron(N)</span></code> context where N specifies the index of the Neuron core to deploy. For more information, see <a class="reference external" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/appnotes/perf/flex-eg.html">https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/appnotes/perf/flex-eg.html</a> .</p>
<div class="nbinput nblast docutils container">
<div class="prompt highlight-none notranslate"><div class="highlight"><pre><span></span>[ ]:
</pre></div>
</div>
<div class="input_area highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">os</span>
<span class="kn">import</span> <span class="nn">warnings</span>
<span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="n">path</span><span class="o">+</span><span class="s1">'synset.txt'</span><span class="p">)</span>
<span class="n">fname</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">test_utils</span><span class="o">.</span><span class="n">download</span><span class="p">(</span><span class="s1">'https://raw.githubusercontent.com/awslabs/mxnet-model-server/master/docs/images/kitten_small.jpg?raw=true'</span><span class="p">)</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">image</span><span class="o">.</span><span class="n">imread</span><span class="p">(</span><span class="n">fname</span><span class="p">)</span> <span class="c1"># convert into format (batch, RGB, width, height)</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">image</span><span class="o">.</span><span class="n">imresize</span><span class="p">(</span><span class="n">img</span><span class="p">,</span> <span class="mi">224</span><span class="p">,</span> <span class="mi">224</span><span class="p">)</span> <span class="c1"># resize</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">img</span><span class="o">.</span><span class="n">transpose</span><span class="p">((</span><span class="mi">2</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">))</span> <span class="c1"># Channel first</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">img</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">axis</span><span class="o">=</span><span class="mi">0</span><span class="p">)</span> <span class="c1"># batchify</span>
<span class="n">img</span> <span class="o">=</span> <span class="n">img</span><span class="o">.</span><span class="n">astype</span><span class="p">(</span><span class="n">dtype</span><span class="o">=</span><span class="s1">'float32'</span><span class="p">)</span>
<span class="n">sym</span><span class="p">,</span> <span class="n">args</span><span class="p">,</span> <span class="n">aux</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">model</span><span class="o">.</span><span class="n">load_checkpoint</span><span class="p">(</span><span class="s1">'resnet-50_compiled'</span><span class="p">,</span> <span class="mi">0</span><span class="p">)</span>
<span class="n">softmax</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">nd</span><span class="o">.</span><span class="n">random_normal</span><span class="p">(</span><span class="n">shape</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,))</span>
<span class="n">args</span><span class="p">[</span><span class="s1">'softmax_label'</span><span class="p">]</span> <span class="o">=</span> <span class="n">softmax</span>
<span class="n">args</span><span class="p">[</span><span class="s1">'data'</span><span class="p">]</span> <span class="o">=</span> <span class="n">img</span>
<span class="n">os</span><span class="o">.</span><span class="n">environ</span><span class="p">[</span><span class="s2">"NEURON_RT_NUM_CORES"</span><span class="p">]</span> <span class="o">=</span> <span class="s1">'4'</span>
<span class="c1"># Inferentia context - group index 1 (size 2) would skip NC0 and place the</span>
<span class="c1"># compiled model onto NC1,2</span>
<span class="n">ctx</span> <span class="o">=</span> <span class="n">mx</span><span class="o">.</span><span class="n">neuron</span><span class="p">(</span><span class="mi">1</span><span class="p">)</span>
<span class="n">exe</span> <span class="o">=</span> <span class="n">sym</span><span class="o">.</span><span class="n">bind</span><span class="p">(</span><span class="n">ctx</span><span class="o">=</span><span class="n">ctx</span><span class="p">,</span> <span class="n">args</span><span class="o">=</span><span class="n">args</span><span class="p">,</span> <span class="n">aux_states</span><span class="o">=</span><span class="n">aux</span><span class="p">,</span> <span class="n">grad_req</span><span class="o">=</span><span class="s1">'null'</span><span class="p">)</span>
<span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s1">'synset.txt'</span><span class="p">,</span> <span class="s1">'r'</span><span class="p">)</span> <span class="k">as</span> <span class="n">f</span><span class="p">:</span>
<span class="n">labels</span> <span class="o">=</span> <span class="p">[</span><span class="n">l</span><span class="o">.</span><span class="n">rstrip</span><span class="p">()</span> <span class="k">for</span> <span class="n">l</span> <span class="ow">in</span> <span class="n">f</span><span class="p">]</span>
<span class="n">exe</span><span class="o">.</span><span class="n">forward</span><span class="p">(</span><span class="n">data</span><span class="o">=</span><span class="n">img</span><span class="p">)</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">exe</span><span class="o">.</span><span class="n">outputs</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">asnumpy</span><span class="p">()</span><span class="c1"># print the top-5</span>
<span class="n">prob</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">squeeze</span><span class="p">(</span><span class="n">prob</span><span class="p">)</span>
<span class="n">a</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">argsort</span><span class="p">(</span><span class="n">prob</span><span class="p">)[::</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="n">a</span><span class="p">[</span><span class="mi">0</span><span class="p">:</span><span class="mi">5</span><span class="p">]:</span>
<span class="nb">print</span><span class="p">(</span><span class="s1">'probability=</span><span class="si">%f</span><span class="s1">, class=</span><span class="si">%s</span><span class="s1">'</span> <span class="o">%</span><span class="p">(</span><span class="n">prob</span><span class="p">[</span><span class="n">i</span><span class="p">],</span> <span class="n">labels</span><span class="p">[</span><span class="n">i</span><span class="p">]))</span>
</pre></div>
</div>
</div>
You can experiment with different NeuronCore group combinations and different models.
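For example, the sketch below (added here for illustration, not from the original tutorial) binds the hypothetical single-core checkpoint from the compilation section onto a different context so that it runs alongside the two-core model above. The index passed to mx.neuron() is assumed to select a placement that does not overlap NC1-2; confirm the exact placement rules against the flexible execution group application note linked earlier.

```
# Illustrative sketch: bind a second, single-core model next to the two-core
# model bound above. Assumes 'resnet-50_compiled_1nc' was produced as in the
# compilation sketch; placement semantics are an assumption here.
sym1, args1, aux1 = mx.model.load_checkpoint('resnet-50_compiled_1nc', 0)
args1['softmax_label'] = mx.nd.random_normal(shape=(1,))
args1['data'] = img
exe1 = sym1.bind(ctx=mx.neuron(0), args=args1, aux_states=aux1, grad_req='null')
exe1.forward(data=img)
print(np.squeeze(exe1.outputs[0].asnumpy()).shape)
```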
<div class="section" id="Troubleshooting">
<h3>Troubleshooting<a class="headerlink" href="#Troubleshooting" title="Permalink to this headline">#</a></h3>
<p>If not enough NeuronCores are provided, an error message will be displayed:</p>
<div class="highlight-none notranslate"><div class="highlight"><pre><span></span>mxnet.base.MXNetError: [04:01:39] src/operator/subgraph/neuron/./neuron_util.h:541: Check failed: rsp.status().code() == 0: Failed load model with Neuron-RTD Error. Neuron-RTD Status Code: 9, details: ""
</pre></div>
</div>
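One quick sanity check (an addition here, not from the original page) is to confirm, before binding, that NEURON_RT_NUM_CORES covers the combined group sizes of every model you intend to load:

```
import os
# The visible core count must cover the sum of every loaded model's
# --neuroncore-pipeline-cores size (e.g. a two-core plus a one-core model).
num_cores = int(os.environ.get("NEURON_RT_NUM_CORES", "1"))
required = 2 + 1  # adjust to the models actually being loaded
assert num_cores >= required, (
    "NEURON_RT_NUM_CORES=%d is smaller than the %d cores required"
    % (num_cores, required))
```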
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/transformers-neuronx/developer-guide.rst.txt
```
.. _tn_developer_guide:
Transformers Neuron Developer Guide (``transformers-neuronx``)
==============================================================
.. toctree::
:maxdepth: 1
:hidden:
/libraries/transformers-neuronx/transformers-neuronx-developer-guide
.. include:: /libraries/transformers-neuronx/developer-guide.txt
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/transformers-neuronx/transformers-neuronx-tutorials.rst.txt
```
.. _transformers_neuronx_tutorials:
Transformers Neuron Tutorials (``transformers-neuronx``)
========================================================
.. toctree::
:maxdepth: 1
:hidden:
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1 <https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb>
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1 <https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb>
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1 <https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb>
Hugging Face facebook/opt-66b autoregressive sampling on Inf2 <https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb>
.. include:: /libraries/transformers-neuronx/transformers-neuronx-tutorials.txt
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/index.rst.txt
```
.. _neuronx-distributed-index:
Neuron Distributed
===================
Neuron Distributed is a package that supports different distributed
training/inference mechanisms for Neuron devices. It provides XLA-friendly
implementations of some of the more popular distributed training/inference
techniques. As model sizes scale, fitting these models on a single device
becomes impossible, so we have to make use of model sharding techniques to
partition the model across multiple devices. As part of this library, we
enable support for the Tensor Parallel sharding technique, with support for
other distributed techniques to be added in the future.
.. toctree::
:maxdepth: 1
:hidden:
Setup </libraries/neuronx-distributed/setup/index>
App Notes </libraries/neuronx-distributed/app_notes>
API Reference Guide </libraries/neuronx-distributed/api-reference-guide>
Developer Guide </libraries/neuronx-distributed/developer-guide>
Tutorials </libraries/neuronx-distributed/tutorials/index>
Misc </libraries/neuronx-distributed/neuronx-distributed-misc>
.. include:: /libraries/neuronx-distributed/neuronx-distributed.txt
```
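The tensor parallel sharding mentioned above can be pictured with a small NumPy sketch (added here for illustration; it does not use the library itself): a linear layer's weight is split along its output dimension across ranks, each rank computes a partial output, and gathering the partials reproduces the full result.

```
import numpy as np

# Conceptual sketch of column-parallel sharding (illustration only, not the
# neuronx-distributed API): split a Linear weight across tp_size ranks.
hidden, tp_size = 8, 2
weight = np.random.randn(hidden, hidden).astype("float32")   # full [out, in] weight
shards = np.split(weight, tp_size, axis=0)                   # one output-dim shard per rank
x = np.random.randn(1, hidden).astype("float32")

partials = [x @ w.T for w in shards]          # each rank computes its slice of the output
gathered = np.concatenate(partials, axis=1)   # an all-gather would restore the full output

assert np.allclose(gathered, x @ weight.T, atol=1e-5)
```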
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/developer-guide.rst.txt
```
.. _neuronx_distributed_developer_guide
Developer Guide (``neuronx-distributed`` )
==========================================
.. toctree::
:maxdepth: 1
:hidden:
/libraries/neuronx-distributed/tp_developer_guide
.. include:: /libraries/neuronx-distributed/developer-guide.txt
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/tp_developer_guide.rst.txt
```
.. _tp_developer_guide:
Developer guide for Tensor Parallelism (``neuronx-distributed`` )
=================================================================
Training
^^^^^^^^
For training models with tensor parallelism, one would have to make a few
changes to their model/training script. Below we walk through the
different changes one would have to make to shard the models across
devices.
Creating DataLoader:
''''''''''''''''''''
When we shard the model across devices using tensor parallelism, all the
tensor parallel workers are operating on the same batch of data. Hence,
to ensure that each tensor parallel worker is getting the same data, we
make use of ``DistributedSampler`` as shown in the snippet below
.. code:: ipython3

    def create_pretraining_dataset(
        input_file, max_pred_length, mini_batch_size, worker_init
    ):
        train_data = pretraining_dataset(
            input_file=input_file, max_pred_length=max_pred_length
        )
        # To distribute the data across different workers in the world,
        # we use the DistributedSampler. The num_replicas should be equal
        # to the data_parallel_world_size. Note: data_parallel_rank=0 can have
        # multiple tensor parallel ranks and each of these should get the same
        # data.
        train_sampler = DistributedSampler(
            train_data,
            num_replicas=parallel_state.get_data_parallel_world_size(),
            rank=parallel_state.get_data_parallel_rank(),
        )
        train_dataloader = DataLoader(
            train_data,
            sampler=train_sampler,
            batch_size=mini_batch_size,
            num_workers=0,
            worker_init_fn=worker_init,
            drop_last=True,
            pin_memory=True,
        )
        return train_dataloader
Creating Model:
'''''''''''''''
One can create models by replacing the large linear layers with
``ColumnParallel`` and ``RowParallel`` Linear layers. In the case of
transformers, we have a convenient structure where the Attention block
usually has linear projections for QKV, followed by a fully connected
layer. Let's take a look at the example for the BERT model. We make the
attention module of the BERT model use tensor parallel layers, thereby
adding the ability to shard the model across devices.
.. code:: ipython3

    class ParallelSelfAttention(transformers.models.bert.modeling_bert.BertSelfAttention):
        def __init__(self, config, position_embedding_type=None):
            super().__init__(config, position_embedding_type)
            self.query = ColumnParallelLinear(config.hidden_size,
                                              self.all_head_size,
                                              gather_output=False)
            self.key = ColumnParallelLinear(config.hidden_size,
                                            self.all_head_size,
                                            gather_output=False)
            self.value = ColumnParallelLinear(config.hidden_size,
                                              self.all_head_size,
                                              gather_output=False)
            # Since we shard the number of attention heads across tensor parallel
            # ranks, each rank would have a subset of heads, hence, we update
            # the num_attention_heads here.
            tp_size = parallel_state.get_tensor_parallel_size()
            self.num_attention_heads = self.num_attention_heads // tp_size
            self.all_head_size = self.all_head_size // tp_size
As seen, we just had to swap out the linear layers with ColumnParallel
Linear layers, and the rest of the forward method of the attention layer
can work as is. Note: In the above ColumnParallelLinear layers we are not
gathering output from each rank; in other words, each rank is working
on its own shard. We can set gather_output=True, which would gather the
output from all ranks and give you a full-dimension output. However,
gathering output from all ranks would introduce an all-gather operation
which can be expensive depending on the size of the tensor. In the case of
the attention module, we know that the SelfAttention block is followed by
the MLP block. Hence, we replace the linear layer there with a
RowParallelLinear as shown below:
.. code:: ipython3

    class ParallelSelfOutput(transformers.models.bert.modeling_bert.BertSelfOutput):
        def __init__(self, config):
            super().__init__(config)
            self.dense = RowParallelLinear(config.hidden_size,
                                           config.hidden_size,
                                           input_is_parallel=True)
As seen, we just had to replace the dense layer here and pass the
``input_is_parallel`` argument. This way, the ``RowParallelLinear``
operates on partitions and produces a collective result.

Making just the above two changes can help you partition a good chunk of
your model across multiple workers, thereby allowing models of larger
size to be trained on a single instance. Note: The majority of the
parameters of a transformer model are in these linear layers, and hence
partitioning these layers can help you scale.
Final Training script:
''''''''''''''''''''''
Once the dataloader and model changes are done, we are ready to build the
training script. The good news is that you can use the same training loop
as for data-parallel training; only minor tweaks are needed to get it all
started.
.. code:: ipython3
import torch_xla.core.xla_model as xm
import neuronx_distributed
from neuronx_distributed import parallel_layers
from neuronx_distributed.parallel_layers import parallel_state, clip_grad_norm

neuronx_distributed.parallel_state.initialize_model_parallel(tensor_model_parallel_size=2)

dataloader = create_pretraining_dataset(
                input_file, max_pred_length, mini_batch_size, worker_init)

model = YourNewlyBuiltParallelModel(config)
# We have to move the model to device using this API, because when
# we move model to device using .to(device), the model parameter's
# attributes aren't preserved. This causes some of the tensor parallel
# attributes to be lost. Hence, this API takes care of preserving the
# tensor parallel attributes.
parallel_layers.move_model_to_device(model, device)

for inputs, labels in dataloader:
    output = model(*inputs)
    loss = loss_fn(output, labels)
    loss.backward()
    # Here we use clip_grad_norm from neuronx_distributed as that
    # can handle tensor parallel ranks
    clip_grad_norm(model.parameters(), max_norm)
    # For the optimizer step, we have to pass the data_parallel group
    xm.optimizer_step(
        optimizer,
        groups=parallel_state.get_data_parallel_group(as_list=True)
    )
    optimizer.zero_grad()
    scheduler.step()
A few things to note in the above code snippet:

1. We initialize model parallelism with a tensor parallel size of 2. This
   shards the model across 2 devices.
2. We use the ``move_model_to_device`` API to move the model to device.
   This is equivalent to ``model.to(device)``; we need to call this API
   explicitly because some of the tensor-parallel attributes are not
   copied over when the model is moved using ``model.to(device)``.
3. We call ``clip_grad_norm`` from ``parallel_layers``. This
   ``clip_grad_norm`` takes care of accumulating the ``max_norm`` from the
   tensor-parallel ranks and producing the correct output.
4. We pass the ``data_parallel_group`` to ``optimizer_step``. If we don't
   pass the group, the default is all the workers in the world.
Saving Model:
'''''''''''''
Once training is done, we want to save the model. This can be done easily
by calling the ``save`` API from ``neuronx_distributed.parallel_layers``.
Here is an example:
.. code:: ipython3
neuronx_distributed.parallel_layers.save({
'epoch': epoch,
'model': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss,
...
}, PATH)
Note the ``model`` key used here; we need to provide the same key during
model load.
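For completeness, here is a minimal sketch of loading this checkpoint back,
assuming the same ``PATH`` and that the parallel model object has already been
constructed (the ``model_key`` must match the ``model`` key used above):

.. code:: ipython3

    # Sketch only: load the sharded checkpoint back into the
    # tensor-parallel model on each worker. The key passed as
    # model_key must match the key used when saving ('model' above).
    neuronx_distributed.parallel_layers.load(
        PATH,
        model=model,
        model_key='model',
        sharded=True,
    )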
```
|
|
2023-09-29T20:54:56.190Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/tutorials/index.rst.txt
|
```
.. _tp_tutorials:
Tutorials for Neuron Distributed (``neuronx-distributed`` )
============================================================
.. toctree::
:maxdepth: 1
:hidden:
Training using Tensor Parallelism </libraries/neuronx-distributed/tutorials/training>
Training GPT-NeoX 6.9B using TP and ZeRO-1 </libraries/neuronx-distributed/tutorials/training-gpt-neox>
Training GPT-NeoX 20B using TP and ZeRO-1 </libraries/neuronx-distributed/tutorials/training-gpt-neox-20b>
Training Llama2 7B using TP and ZeRO-1 </libraries/neuronx-distributed/tutorials/training-llama2-7b>
/src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.ipynb
.. toctree::
:maxdepth: 1
:hidden:
Inference using Tensor Parallelism </libraries/neuronx-distributed/tutorials/inference>
.. include:: /libraries/neuronx-distributed/tutorials/neuronx_distributed_tutorials.txt
```
|
|
2023-09-29T20:54:56.227Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/transformers-neuronx/transformers-neuronx-misc.rst.txt
|
```
.. _transformers-neuronx-misc:
Misc (``transformers-neuronx``)
===============================
.. toctree::
:maxdepth: 1
:hidden:
/release-notes/torch/transformers-neuronx/index
.. include:: /libraries/transformers-neuronx/transformers-neuronx-misc.txt
```
|
|
2023-09-29T20:54:56.327Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/api_guide.rst.txt
|
```
.. _api_guide:
API Reference Guide (``neuronx-distributed`` )
======================================================================
Neuronx-Distributed is an XLA-based library for distributed training and inference.
As part of this library, we currently support 2D parallelism: tensor parallelism
and data parallelism. We also support the ZeRO-1 optimizer to shard the optimizer states.
To support tensor parallelism on Neuron, we adopted the Apex library
built for CUDA devices and modified the implementations to work with
XLA. This document lists the different APIs and modules provided by the library.
Parallel Model State:
^^^^^^^^^^^^^^^^^^^^^
Initialize Model Parallelism:
'''''''''''''''''''''''''''''
::
def neuronx_distributed.parallel_state.initialize_model_parallel(
tensor_model_parallel_size=1)
This API initializes distributed model training and allows users to set
the tensor-parallel world size.
Parameters:
``tensor_model_parallel_size`` : Sets the number of tensor-parallel
workers. Note that the default value is 1.
Other helper APIs:
''''''''''''''''''
- ``neuronx_distributed.parallel_state.get_data_parallel_size()`` :
Returns the data parallel world size depending on the number of
global workers and tensor parallel workers.
- ``neuronx_distributed.parallel_state.get_tensor_model_parallel_size()``
: Returns the tensor parallel world size.
- ``neuronx_distributed.parallel_state.get_tensor_model_parallel_rank()``
: Returns the rank of the worker within the tensor parallel group
- ``neuronx_distributed.parallel_state.get_data_parallel_rank()`` :
Returns the rank of the worker in the data parallel group.
- ``neuronx_distributed.parallel_state.get_data_parallel_group(as_list=False)``
: Returns the data parallel group after taking into account the
tensor parallel size and the global world size. as_list argument when
set to True, would return the group as a List[List] otherwise it
would return a torch.distributed.group.
- ``neuronx_distributed.parallel_state.get_tensor_model_parallel_group(as_list=False)``
: Returns the tensor parallel group after taking into account the
tensor parallel size and the global world size. as_list argument when
set to True, would return the group as a List[List] otherwise it
would return a torch.distributed.group.
- ``move_model_to_device(model, device)``: This API moves the model to device while
  preserving the tensor parallel attributes (see the short sketch below).
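A short, illustrative sketch (not part of the original API text) of how these
helpers are typically queried after initialization:

::

   import neuronx_distributed
   from neuronx_distributed.parallel_layers import parallel_state

   # Initialize model parallelism first (a tensor-parallel size of 8 is
   # just an example value).
   neuronx_distributed.parallel_state.initialize_model_parallel(
       tensor_model_parallel_size=8)

   # Query the parallel state to build rank-aware logic.
   dp_size = parallel_state.get_data_parallel_size()
   tp_size = parallel_state.get_tensor_model_parallel_size()
   dp_rank = parallel_state.get_data_parallel_rank()
   tp_rank = parallel_state.get_tensor_model_parallel_rank()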
Parallel Layers:
^^^^^^^^^^^^^^^^
The majority of parameters within transformer-based models reside in the
Embedding and Linear layers. Hence, to reduce the number of parameters
held on a single device because of these layers, we provide sharded
Embedding and Linear layers.
Parallel Embedding:
'''''''''''''''''''
::
class neuronx_distributed.parallel_layers.ParallelEmbedding(
num_embeddings, embedding_dim, init_method=init.normal_,
dtype=torch.float32, device=None)
This module is intended to replace ``torch.nn.Embedding``. In cases where
the vocab size is too large, we can shard the embedding table across
workers. Note: the embedding table is sharded across all the
tensor-parallel workers.
.. _parameters-1:
Parameters:
- ``num_embeddings (int)`` : size of the dictionary of embeddings
- ``embedding_dim (int)`` : the size of each embedding vector
- ``init_method: (torch.nn.init)`` : Initialization function for the
embedding weights.
- ``dtype: (dtype)`` : Datatype for the weights
- ``device: (torch.device)`` : Device to initialize the weights on. By
default, the weights would be initialized on CPU
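A hedged usage sketch (the vocabulary and hidden sizes below are placeholders,
not values from this guide):

::

   import torch
   from neuronx_distributed.parallel_layers import ParallelEmbedding

   # Replaces torch.nn.Embedding; the embedding table is sharded across
   # all tensor-parallel workers.
   word_embeddings = ParallelEmbedding(
       num_embeddings=50000,
       embedding_dim=1024,
       dtype=torch.float32,
   )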
ColumnParallel Linear Layer:
''''''''''''''''''''''''''''
::
class neuronx_distributed.parallel_layers.ColumnParallelLinear(
input_size, output_size, bias=True, gather_output=True,
sequence_parallel_enabled=False, dtype=torch.float32, device=None)
This module performs a column-wise partition of the weight matrix. The
linear layer is defined as ``Y = XA + b``, where A is parallelized along
its second dimension as ``A = [A_1, A_2 .... A_p]``. ``Note``: This layer
is designed to operate on 3-dimensional inputs.
.. _parameters-2:
Parameters:
- ``input_size: (int)`` : First dimension of the weight matrix
- ``output_size: (int)`` : Second dimension of the weight matrix
- ``bias: (bool)``: If set to True, bias would be added
- ``gather_output: (bool)`` : If true, call all-gather on output and
make Y available to all Neuron devices, otherwise, every Neuron
device will have its output which is Y_i = XA_i
- ``sequence_parallel_enabled: (bool)`` : When sequence-parallel is enabled, it would
gather the inputs from the sequence parallel region and perform the forward and backward
passes
- ``dtype: (dtype)`` : Datatype for the weights
- ``device: (torch.device)`` : Device to initialize the weights on. By
default, the weights would be initialized on CPU
RowParallel Linear Layer:
'''''''''''''''''''''''''
::
class neuronx_distributed.parallel_layers.RowParallelLinear(
input_size, output_size, bias=True, input_is_parallel=False,
sequence_parallel_enabled=False, dtype=torch.float32, device=False
)
The linear layer is defined as ``Y = XA + b``. A is parallelized along
its first dimension and X along its second. ``Note``: This layer is
designed to operate on 3-dimensional inputs.
.. _parameters-3:
Parameters:
- ``input_size: (int)`` : First dimension of the weight matrix
- ``output_size: (int)`` : Second dimension of the weight matrix
- ``bias: (bool)`` : If set to True, bias would be added
- ``input_is_parallel: (bool)`` : If true, we assume that the input is
already split across the Neuron devices and we do not split again.
This is useful when we have a ColumnParallel Layer just before the
Row Parallel layer
- ``sequence_parallel_enabled: (bool)`` : When sequence-parallel is enabled, it would
gather the inputs from the sequence parallel region and perform the forward and backward
passes
- ``dtype: (dtype)`` : Datatype for the weights
- ``device: (torch.device)`` : Device to initialize the weights on. By
default, the weights would be initialized on CPU
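As a sketch of how the two layers are commonly composed (sizes are placeholders;
this mirrors the Megatron-style MLP pattern described in the developer guide),
the column-parallel layer keeps its output sharded and feeds the row-parallel
layer directly:

::

   import torch
   from neuronx_distributed.parallel_layers import (
       ColumnParallelLinear,
       RowParallelLinear,
   )

   class ParallelMLP(torch.nn.Module):
       def __init__(self, hidden_size=1024, intermediate_size=4096):
           super().__init__()
           # First projection is sharded column-wise; keep the output sharded.
           self.up_proj = ColumnParallelLinear(hidden_size, intermediate_size,
                                               gather_output=False)
           # Second projection consumes the sharded input and all-reduces the
           # partial results into the full output.
           self.down_proj = RowParallelLinear(intermediate_size, hidden_size,
                                              input_is_parallel=True)

       def forward(self, x):
           return self.down_proj(torch.nn.functional.gelu(self.up_proj(x)))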
Padding Tensor-Parallel Layers
''''''''''''''''''''''''''''''
::
def neuronx_distributed.parallel_layers.pad.pad_model(
model, tp_degree, n_heads, wrapped_classes=(), pad_hook_fn=None)
Pads a generic model so that it functions with a desired tensor parallelism degree by padding the
number of attention heads. Returns the original model modified with padding.
Uses a 1-axis padding strategy: pads the sharded dim of the ParallelLinear layers to the
size it would have been for the padded number of heads.
.. _parameters-4:
Parameters:
- ``model (torch.nn.Module)`` : model to be padded
- ``tp_degree (int)`` : tensor parallel degree
- ``n_heads (int)`` : the number of heads the given model to be padded has. This can
typically be found in the config
- ``wrapped_classes (Tuple[any], *optional*, defaults to `()`)`` : tuple of classes
(and their submodules) which should be padded
- ``pad_hook_fn (Callable[any, float], *optional*, defaults to `None`)`` : a hook
  function that is called whenever encountering a class to pad. Receives an instance
  of the class to pad and the tgt_src_ratio (num_heads_padded / num_heads) as its argument
Usage:
When modifying the Attention layer, typically you must divide by TP degree like so:
::
self.num_heads = neuronx_dist_utils.divide(self.num_heads, get_tensor_model_parallel_size())
This line must be modified like so:
::
self.num_heads = neuronx_dist_utils.divide(
self.num_heads + get_number_of_extra_heads(self.num_heads, get_tensor_model_parallel_size()),
get_tensor_model_parallel_size())
Then, after initializing the model, you must call this wrapper:
::
model = get_model(config=desired_config)
model = pad_model(model, tp_degree=32, n_heads=desired_config.num_heads)  # Use the model as desired after this point
You can specify a specific layer or class of your model to pad, so you aren't unnecessarily padding.
Typically, this layer will be your Attention layer:
::
model = pad_model(model, tp_degree=32, n_heads=desired_config.num_heads, wrapped_classes=[MyAttention])
You can also specify a pad_hook_fn, to be called whenever encountering an instance of wrapped_class,
passing in said instance as a parameter, along with the tgt_src_ratio (num_heads_padded / num_heads).
::
def my_hook(attention_to_pad, tgt_src_ratio):
attention_to_pad.split_size = int(model.split_size * tgt_src_ratio)
model = pad_model(
    model,
    tp_degree=32,
    n_heads=desired_config.num_heads,
    wrapped_classes=[MyAttention],
    pad_hook_fn=my_hook
)
Loss functions:
''''''''''''''''''
When the final MLP layer is sharded using tensor parallelism, instead of
re-collecting all the outputs from each TP rank, we can use the
ParallelCrossEntropy loss function. This function takes the parallel
logits produced by the final parallel MLP and produces a loss, taking into
account that the logits are sharded across multiple workers.
::
def neuronx_distributed.parallel_layers.loss_functions.parallel_cross_entropy(
parallel_logits, labels, label_smoothing=0.0)
.. _parameters-6:
Parameters:
- ``parallel_logits (Tensor)`` : Sharded logits from the previous MLP
- ``labels (Tensor)`` : Label for each token. Labels should not be sharded,
and the parallel_cross_entropy would take care of sharding the labels internally
- ``label_smoothing (float)`` : A float in [0.0, 1.0]. Specifies the amount of
smoothing when computing the loss, where 0.0 means no smoothing
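A hedged usage sketch, assuming ``parallel_logits`` come from a final
``ColumnParallelLinear`` with ``gather_output=False`` (so each rank holds a
vocabulary shard) and ``labels`` are the unsharded token labels:

::

   from neuronx_distributed.parallel_layers.loss_functions import parallel_cross_entropy

   loss = parallel_cross_entropy(parallel_logits, labels, label_smoothing=0.0)
   # Depending on the reduction behaviour, a mean over tokens may still be
   # required here (assumption; check the shape of the returned tensor).
   loss = loss.mean()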
Checkpointing:
^^^^^^^^^^^^^^
These are a set of APIs for saving and loading checkpoints. These APIs
take care of saving and loading the shard depending on the tensor parallel
rank of the worker.
Save Checkpoint:
''''''''''''''''
::
def neuronx_distributed.parallel_layers.save(state_dict, save_dir, save_serially = True, down_cast_bf16 = False)
This API saves the model from each tensor-parallel rank into ``save_dir``.
Only workers with data parallel rank equal to 0 save the checkpoints. Each
tensor parallel rank creates a ``tp_rank_i`` folder inside ``save_dir`` and
saves its shard in that folder.
.. _parameters-4:
Parameters:
- ``state_dict: (dict)`` : Model state dict. It is the same dict that you
  would save using torch.save
- ``save_dir: (str)`` : Model save directory.
- ``save_serially: (bool)``: This flag would save checkpoints one data-parallel rank at a time.
This is particularly useful when we are checkpointing large models.
- ``down_cast_bf16: (bool)``: This flag would downcast the state_dict to bf16 before saving.
Load Checkpoint
'''''''''''''''
::
def neuronx_distributed.parallel_layers.load(
load_dir, model=None, model_key='model', sharded=True)
This API automatically loads the checkpoint shard depending on the tensor
parallel rank. For large models, one should pass the model object to the
load API to load the weights directly into the model. This avoids
host OOM, as the load API loads the checkpoint for one tensor
parallel rank at a time.
.. _parameters-5:
Parameters:
- ``load_dir: (str)`` : Directory where the checkpoint is saved.
- ``model``: (torch.nn.Module): Model object
- ``model_key: (str)`` :The model key used when saving the model in the
state_dict.
- ``sharded: (bool)`` : If the checkpoint is not sharded, pass False.
This is useful (especially during inference) when the model is
trained using a different strategy and you end up saving a single
unsharded checkpoint. You can then load this unsharded checkpoint
onto the sharded model. When this attribute is set to ``False`` , it
is necessary to pass the model object. Note: The keys in the
state-dict should have the same name as in the model object, else it
would raise an error.
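A combined sketch of the two checkpointing calls (the directory name is a
placeholder):

::

   from neuronx_distributed import parallel_layers

   # Save: each tensor-parallel rank writes its shard into a tp_rank_i
   # folder under 'ckpt'; only data-parallel rank 0 workers save.
   parallel_layers.save({'model': model.state_dict()}, 'ckpt')

   # Load: each worker picks up the shard matching its tensor-parallel
   # rank and loads the weights directly into the model object.
   parallel_layers.load('ckpt', model=model, model_key='model', sharded=True)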
Gradient Clipping:
''''''''''''''''''
With tensor parallelism, we need special handling for gradient clipping,
as we have to accumulate the total norm from all the tensor parallel ranks.
This is handled by the following API:
::
def neuronx_distributed.parallel_layers.clip_grad_norm(
parameters, max_norm, norm_type=2)
.. _parameters-6:
Parameters:
- ``parameters (Iterable[Tensor] or Tensor)`` : an iterable of Tensors
or a single Tensor that will have gradients normalized
- ``max_norm (float or int)`` :max norm of the gradients
- ``norm_type (float or int)`` : type of the used p-norm. Can be ‘inf’
for infinity norm.
Neuron Zero1 Optimizer:
'''''''''''''''''''''''
In Neuronx-Distributed, we built a wrapper on the Zero1-Optimizer present in torch-xla.
::
class NeuronZero1Optimizer(Zero1Optimizer)
This wrapper takes into account the tensor-parallel degree and computes the grad-norm
accordingly. It also provides two APIs: save_sharded_state_dict and load_sharded_state_dict.
As the size of the model grows, saving the optimizer state from a single rank can result in OOMs.
Hence, the ``save_sharded_state_dict`` API allows each data-parallel rank to save its own shard of the optimizer state. To
load this sharded optimizer state, there is a corresponding ``load_sharded_state_dict`` that allows each
rank to pick its corresponding shard from the checkpoint directory.
::
optimizer_grouped_parameters = [
{
"params": [
p for n, p in param_optimizer if not any(nd in n for nd in no_decay)
],
"weight_decay": 0.01,
},
{
"params": [
p for n, p in param_optimizer if any(nd in n for nd in no_decay)
],
"weight_decay": 0.0,
},
]
optimizer = NeuronZero1Optimizer(
optimizer_grouped_parameters,
AdamW,
lr=flags.lr,
pin_layout=False,
sharding_groups=parallel_state.get_data_parallel_group(as_list=True),
grad_norm_groups=parallel_state.get_tensor_model_parallel_group(as_list=True),
)
The interface is the same as the Zero1Optimizer in torch-xla.
::
save_sharded_state_dict(output_dir, save_serially = True)
.. _parameters-7:
Parameters:
- ``output_dir (str)`` : Checkpoint directory where the sharded optimizer states need to be saved
- ``save_serially (bool)`` : Whether to save the states one data-parallel rank at a time. This is
especially useful when we want to checkpoint large models.
::
load_sharded_state_dict(output_dir, num_workers_per_step = 8)
.. _parameters-8:
Parameters:
- ``output_dir (str)`` : Checkpoint directory where the sharded optimizer states are saved
- ``num_workers_per_step (int)`` : This argument controls how many workers are doing model load
in parallel.
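Continuing from the optimizer constructed above, a sketch of the sharded
checkpointing calls (the directory name is a placeholder):

::

   # Each data-parallel rank saves its own optimizer shard.
   optimizer.save_sharded_state_dict('optim_ckpt', save_serially=True)

   # On restart, each rank picks up its corresponding shard.
   optimizer.load_sharded_state_dict('optim_ckpt', num_workers_per_step=8)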
Model Trace:
^^^^^^^^^^^^
We can use the tensor parallel layers to perform large model inference
too. For performing inference, we can re-use the Parallel model built
above for training and then use the trace APIs provided by the
neuronx_distributed package to trace it for inference. One can use the
following set of APIs for running distributed inference:
::
def neuronx_distributed.trace.parallel_model_trace(func, inputs, tp_degree=1)
This API would launch tensor parallel workers, where each worker would
trace its own model. These traced models would be wrapped with a single
TensorParallelModel module which can then be used like any other traced
model.
.. _parameters-9:
Parameters:
- ``func : (Function)``: This is a function that returns a ``Model``
object and a dictionary of states. The ``parallel_model_trace`` API would call this function
inside each worker and run trace against them. Note: This differs
from the ``torch_neuronx.trace`` where the ``torch_neuronx.trace``
requires a model object to be passed.
- ``inputs: (torch tensors)`` : The inputs that need to be passed to
  the model.
- ``tp_degree: (int)`` : How many devices to be used when performing
tensor parallel sharding
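A hedged sketch of the calling pattern (``get_model``, ``config`` and the
example input shape are placeholders; ``YourNewlyBuiltParallelModel`` refers to
the parallel model from the developer guide):

::

   import torch
   import neuronx_distributed

   def get_model():
       # Called inside each tensor-parallel worker; must return the model
       # and a dictionary of states (empty here for illustration).
       model = YourNewlyBuiltParallelModel(config)
       model.eval()
       return model, {}

   example_inputs = torch.zeros(1, 128, dtype=torch.int64)
   traced_model = neuronx_distributed.trace.parallel_model_trace(
       get_model, example_inputs, tp_degree=2)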
Trace Model Save/Load:
^^^^^^^^^^^^^^^^^^^^^^
Save:
'''''
::
def neuronx_distributed.trace.parallel_model_save(model, save_dir)
This API saves the traced model in ``save_dir``. Each shard is saved in
its respective directory inside ``save_dir``.
Parameters:
- ``model: (TensorParallelModel)`` : Traced model produced using the
parallel_model_trace api.
- ``save_dir: (str)`` : The directory where the model would be saved
Load:
'''''
::
def neuronx_distributed.trace.parallel_model_load(load_dir)
This API will load the sharded traced model into ``TensorParallelModel``
for inference.
.. _parameters-10:
Parameters:
'''''''''''
- ``load_dir: (str)`` : Directory which contains the traced model.
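A short sketch tying the two calls together (the directory name is a placeholder;
``traced_model`` and ``example_inputs`` are assumed from the tracing step above):

::

   import neuronx_distributed

   # Persist the traced shards, one directory per tensor-parallel rank.
   neuronx_distributed.trace.parallel_model_save(traced_model, 'traced_model')

   # Reload the shards into a TensorParallelModel and run inference.
   restored_model = neuronx_distributed.trace.parallel_model_load('traced_model')
   output = restored_model(example_inputs)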
```
|
|
2023-09-29T20:54:56.355Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/tensor_parallelism_overview.rst.txt
|
```
.. _tensor_parallelism_overview:
Tensor Parallelism Overview
===========================
Tensor Parallelism is a technique in which a tensor is split into N
chunks along a particular dimension such that each device only holds 1/N
chunk of the tensor. Computation is performed using this partial chunk
so as to get partial output. These partial outputs are collected from
all devices ensuring the correctness of the computation is maintained.
Taking a general matrix multiplication as an example, let’s say we have
C = AB. We can split B along the column dimension into [B0 B1 B2 … Bn]
and each device holds a column. We then multiply A with each column in B
on each device, we will get [AB0 AB1 AB2 … ABn]. At this moment, each
device still holds partial results, e.g. device rank 0 holds AB0. To
make sure the result is correct, we need to all-gather the partial
result and concatenate the tensor along the column dimension. In this
way, we are able to distribute the tensor over devices while making sure
the computation flow remains correct.
.. image:: images/tp.png
:alt: Image: image.png
Fig and TP explanation is borrowed from https://colossalai.org/docs/concepts/paradigms_of_parallelism/#tensor-parallel
Similarly we can perform the partition along the row dimensions and
create a RowParallel Linear layer. In RowParallelLinear layer, we
partition the weight matrix along the row dimension. Let’s say we have C
= AB. We can split B along the row dimension into [B0 B1 B2 … Bn] and
each device holds a row. We then multiply the corresponding column shard of A
with each row shard of B on each device, giving [A0B0 A1B1 A2B2 … AnBn]. At this moment, each device
still holds partial results, e.g. device rank 0 holds A0B0. To make sure
the result is correct, we need to all-reduce sum the partial result from
all devices to produce the final output.
Using this principle of sharded linear layers, we can construct MLPs of
arbitrary depth until we need to operate on the whole output tensor, in
which case we would have to reconstruct the output by gathering it from
all devices.
.. image:: images/mlp.png
:alt: Image: image.png
Here is an illustration from the Megatron-LM paper. In the above case, as
you can see, two linear layers are implemented using ColumnParallel and
RowParallel linear layers: the ColumnParallelLinear shards the weight
along its columns and is then followed by a RowParallelLinear layer
which takes in parallel inputs (the sharded outputs from the
ColumnParallelLinear). Consider the example shown in the above diagram,
Z = (XA)B. In this case we split the first matrix multiplication over
the column dimension, such that after the first matrix multiplication
each device holds a partial result, Y0 = XA0, Y1 = XA1 and so on. For the
second matrix multiplication, we partition the weight matrix over the row
dimension; since the inputs are already column-sharded, we can multiply
them directly to produce partial outputs. These outputs finally require
an all-reduce sum, since we want to sum up the per-device column*row results.
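The equivalence described above can be checked with a small, framework-agnostic
sketch (plain PyTorch on CPU, not Neuron-specific; the shapes are arbitrary):
split A column-wise and B row-wise across two "devices" and verify that summing
the partial results reproduces Z = (XA)B.

::

   import torch

   X = torch.randn(4, 6)
   A = torch.randn(6, 8)
   B = torch.randn(8, 6)

   A0, A1 = A.chunk(2, dim=1)   # column shards of A
   B0, B1 = B.chunk(2, dim=0)   # row shards of B

   # Each "device" computes (X @ Ai) @ Bi; the all-reduce is a sum.
   Z_parallel = (X @ A0) @ B0 + (X @ A1) @ B1
   Z_full = (X @ A) @ B

   assert torch.allclose(Z_parallel, Z_full, atol=1e-5)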
Tensor Parallelism for Transformers:
A transformer block
.. image:: images/self-attention.png
:alt: Image: image.png
Fig: Taken from Megatron-LM paper.
As seen from the figure above, a simple self-attention block has the QKV linear layer followed by an MLP.
Using the same Column and Row Parallel linear layers, we can partition
the self-attention block across devices thereby reducing the memory
footprint on each device, since each device now only holds partial
parameters. This weight distribution strategy allows us to scale large
model training across devices.
```
|
|
2023-09-29T20:54:56.406Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/setup/index.rst.txt
|
```
.. _neuronx_distributed_setup:
Neuron Distributed Setup (``neuronx-distributed``)
==================================================
:ref:`Install PyTorch Neuron on Trn1 <setup-torch-neuronx>` to create a PyTorch environment. It is recommended to work inside a Python
virtual environment to avoid package installation issues.
You can install the ``neuronx-distributed`` package using the following command:
.. code:: ipython3
python -m pip install neuronx_distributed --extra-index-url https://pip.repos.neuron.amazonaws.com
Make sure the transformers version is set to ``4.26.0``
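For example, you could pin it explicitly (a plain pip command, shown here only
as a convenience):

.. code:: ipython3

   python -m pip install transformers==4.26.0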
```
|
|
2023-09-29T20:54:56.412Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/torch/transformers-neuronx/index.rst.txt
|
```
.. _OPT: https://huggingface.co/docs/transformers/model_doc/opt
.. _GPT2: https://huggingface.co/docs/transformers/model_doc/gpt2
.. _GPT-J: https://huggingface.co/docs/transformers/model_doc/gptj
.. _Tensor-parallelism-support: https://github.com/aws-neuron/transformers-neuronx/blob/main/README.md#tensor-parallelism-support
.. _features-support: https://github.com/aws-neuron/transformers-neuronx/blob/main/README.md#Currently-supported-models-and-features
.. |generate| replace:: :py:meth:`~transformers.generation_utils.GenerationMixin.generate`
.. |beam_search| replace:: :meth:`~transformers.generation_utils.GenerationMixin.beam_search`
.. |sample| replace:: :meth:`~transformers.generation_utils.GenerationMixin.sample`
.. |greedy_search| replace:: :meth:`~transformers.generation_utils.GenerationMixin.greedy_search`
.. |Trn1| replace:: :ref:`Trn1 <aws-trn1-arch>`
.. |Inf2| replace:: :ref:`Inf2 <aws-inf2-arch>`
.. _transformers-neuronx-rn:
Transformers Neuron (``transformers-neuronx``) release notes
============================================================
.. contents:: Table of Contents
:local:
:depth: 1
Transformers Neuron for |Trn1|/|Inf2| is a software package that enables
PyTorch users to perform large language model (LLM) inference on
second-generation Neuron hardware (See: :ref:`NeuronCore-v2 <neuroncores-v2-arch>`).
Model support status
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Definition of model support status
----------------------------------
- Prototype (Alpha): An initial in-development version of a model that should be considered a preview of future functionality. A prototype may not be fully functional. A prototype model is not expected to perform well and may also have known accuracy issues. Prototype models may not maintain compatibility across versions.
- Experimental (Beta): A functional model which may still need performance & accuracy tuning. An experimental model should produce accurate results in most cases but is not yet considered stable. Experimental models may not maintain compatibility across versions.
- Stable: A model which has been validated for both accuracy and performance. Breaking changes to a stable model will occur with a deprecation notice in advance.
.. list-table::
:widths: auto
:header-rows: 1
:align: left
* - Model Support
- Functional
- Performance Tuned
- Backwards Compatibility
* - Prototype
- No
- No
- No
* - Experimental
- Yes
- No
- No
* - Stable
- Yes
- Yes
- Yes
Current model support status
-----------------------------
- `BLOOM <https://huggingface.co/docs/transformers/model_doc/bloom>`__: [Experimental]
- `GPT2 <https://huggingface.co/docs/transformers/model_doc/gpt2>`__: [Experimental]
- `GPT-J <https://huggingface.co/docs/transformers/model_doc/gptj>`__: [Experimental]
- `GPT-Neox <https://huggingface.co/docs/transformers/model_doc/gpt_neox>`__: [Experimental]
- `LLaMA <https://huggingface.co/docs/transformers/main/model_doc/llama>`__: [Experimental]
- `LLaMA 2 <https://huggingface.co/docs/transformers/main/model_doc/llama2>`__: [Experimental]
- `OPT <https://huggingface.co/docs/transformers/model_doc/opt>`__: [Experimental]
--------------------------
Model features
--------------------------
.. list-table::
:widths: auto
:header-rows: 1
:align: left
* - Model
- Flexible Tensor Parallelism
- Prompt Estimate Support
- Serialization Support
* - BLOOM
- Yes
- Yes
- No
* - GPT2
- Yes
- Partial
- Partial
* - GPT-J
- No
- No
- No
* - GPT-NeoX
- No
- No
- No
* - LLaMA
- Yes
- Yes
- No
* - LLaMA 2
- Yes
- Yes
- No
* - OPT
- Yes
- No
- No
Release [0.7.84]
----------------------
Date: 09/15/2023
Summary
~~~~~~~
What's new in this release
~~~~~~~~~~~~~~~~~~~~~~~~~~
- Use the ``--model-type=transformer`` compiler flag by default for all models. This flag improves performance and compilation time for all models. This flag replaces the ``--model-type=transformer-inference`` flag, which is now deprecated.
Resolved Issues
~~~~~~~~~~~~~~~
- Fixed an issue where the ``HuggingFaceGenerationModelAdapter`` class falls back to serial context encoding for models that have parallel context encoding (``GPT2ForSamplingWithContextBroadcasting``, ``LlamaForSampling``, etc.)
- [GPT2 / OPT] Fixed an issue in the parallel context encoding network where incorrect results could be generated due to incorrect masking logic.
Known Issues and Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Some configurations of LLaMA and LLaMA-2 inference models fail compilation with the error ``IndirectLoad/Save requires contiguous indirect access per partition``. This is fixed in compiler version 2.10.0.35 (Neuron SDK 2.14.1).
- Some configurations of LLaMA and LLaMA-2 inference models fail compilation with the error ``Too many instructions after unroll for function sg0000``. To mitigate this, try the ``-O1`` compiler option (or ``--optlevel 1``) by adding ``os.environ["NEURON_CC_FLAGS"] = "-O1"`` to your script or by setting it in the environment, as shown in the sketch below. A complete fix that does not require this option will come in a future release. Note: Using -O1 in the LLaMA-2 13B tutorial results in about a 50% increase in latency compared to Neuron SDK 2.13.2. If this is not acceptable, please use the compiler version from Neuron SDK 2.13.2.
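A minimal sketch of this mitigation, assuming the flag is set before the Neuron model is compiled (the checkpoint path, import path, and arguments below are illustrative and follow the GPT2 examples in the developer guide):
.. code-block:: python

   import os

   # Request -O1 from the Neuron compiler; must be set before compilation is triggered.
   os.environ["NEURON_CC_FLAGS"] = "-O1"

   from transformers_neuronx.llama.model import LlamaForSampling

   # Illustrative arguments only; use the configuration from your own workload.
   model_neuron = LlamaForSampling.from_pretrained('llama-2-13b-split', batch_size=1, tp_degree=8, amp='bf16')
   model_neuron.to_neuron()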
Release [0.6.106]
----------------------
Date: 08/28/2023
Summary
~~~~~~~
What's new in this release
~~~~~~~~~~~~~~~~~~~~~~~~~~
- [Experimental] Added support for LLaMA 2 (excluding grouped/multi-query versions, such as LLaMA 2 70b)
- [Experimental] Improved the performance of BLOOM and LLaMA models
- Reduced execution latency of token generation in tensor parallel models by improving thread synchronization. (supported in LLaMA only)
- Added an optimized vector implementation of RoPE positional embedding. (supported in LLaMA only)
- Added support for faster context encoding on sequences of varying lengths. This is implemented by allowing multiple buckets for parallel context encoding. During inference the best fit bucket is chosen. (supported in LLaMA/GPT-2 only)
- Added the Neuron Persistent Cache for compilation to automatically load pre-compiled model artifacts. (supported by all models)
- Improved compilation time by compiling models used for different sequence length buckets in parallel. (not supported in GPT-NeoX/GPT-J)
Resolved Issues
~~~~~~~~~~~~~~~
- [LLaMA] Fixed an issue in the parallel context encoding network where incorrect results could be generated if the context length is shorter than the context length estimate
- [GPT2 / OPT] Fixed an issue in the parallel context encoding network where incorrect results could be generated
Known Issues and Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- The ``HuggingFaceGenerationModelAdapter`` class currently falls back to serial context encoding for models that have parallel context encoding (``GPT2ForSamplingWithContextBroadcasting``, ``LlamaForSampling``, etc. )
- Beam search can introduce memory issues for large models
- There can be accuracy issues for the GPT-J model for certain use-cases
Release [0.5.58]
----------------------
Date: 7/21/2023
Summary
~~~~~~~
What's new in this release
~~~~~~~~~~~~~~~~~~~~~~~~~~
- [Experimental] Added support for GPT-NeoX models.
- [Experimental] Added support for BLOOM models.
- [Prototype] Added support for LLaMA models.
- Added support for more flexible tensor-parallel configurations to GPT2, OPT, and BLOOM. The number of attention heads no longer needs to be evenly divisible by ``tp_degree``. (Note: ``tp_degree`` still needs to satisfy the runtime topology constraints for collective communication (i.e., AllReduce). For more details on supported topologies, see: `Tensor-parallelism-support`_ and https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/arch/neuron-features/collective-communication.html.)
- Added multi-query / multi-group attention support for GPT2.
Resolved Issues
~~~~~~~~~~~~~~~
- Fixed NaN issues for GPT2 model.
- Fixed OPT/GPT-NeoX gibberish output.
- Resolved an issue where NaN values could be produced when the context_length argument was used in GPT2/OPT.
Known Issues and Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Missing cache reorder support for beam search.
- For more info, please see `features-support`_.
Release [0.4.0]
----------------------
Date: 6/14/2023
Summary
~~~~~~~
What's new in this release
~~~~~~~~~~~~~~~~~~~~~~~~~~
- Added ``int8`` weight storage for `GPT2`_ models.
- Improved prompt context encoding performance for `GPT2`_ models.
- Improved collective communications performance for tp-degrees 4, 8, and 24 on Inf2.
- Improved collective communications performance for tp-degrees 8 and 32 on Trn1.
- Support for the ``--model-type=transformer-inference`` compiler flag for optimized decoder-only LLM inference.
Resolved Issues
~~~~~~~~~~~~~~~
Incorrect `GPT-J`_ ``linear`` layer sharding
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Added padding to the `GPT-J`_ ``linear`` layer to correctly handle odd vocabulary sizes.
Incorrect output with HuggingFace |beam_search|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Issues where the HuggingFace |generate| method produces incorrect results when
|beam_search| is used have been resolved.
Release [0.3.0]
----------------------
Date: 05/01/2023
Summary
~~~~~~~
What's new in this release
~~~~~~~~~~~~~~~~~~~~~~~~~~
- Added ``transformers-neuronx`` artifacts to PyPI repository.
- Added support for the HuggingFace |generate|.
- Added model serialization support for GPT2 models, including model saving, loading, and
weight swapping.
- Added support for caching compiled artifacts.
- Improved performance by removing unnecessary KV-cache tensor resetting.
- Improved prompt context encoding performance (`OPT`_, `GPT2`_).
Resolved Issues
~~~~~~~~~~~~~~~
Incorrect `GPT-J`_ ``amp_callback`` import
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Fixed the `GPT-J`_ demo to import the correct ``amp_callback`` function.
Known Issues and Limitations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Incorrect output with HuggingFace |beam_search|
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When the HuggingFace |generate| method is configured to use |beam_search|, this
can produce incorrect results for certain configurations. It is recommended to
use other generation methods such as |sample| or |greedy_search|. This will be
fixed in a future Neuron release.
```
|
|
2023-09-29T20:54:56.422Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/transformers-neuronx/transformers-neuronx-developer-guide.rst.txt
|
```
.. _transformers_neuronx_developer_guide:
Transformers Neuron (``transformers-neuronx``) Developer Guide
==============================================================
Transformers Neuron for Trn1 and Inf2 is a software package that enables
PyTorch users to perform large language model (LLM) :ref:`performant inference <neuron_llm_inference>` on
second-generation Neuron hardware (See: :ref:`NeuronCore-v2 <neuroncores-v2-arch>`). The :ref:`Neuron performance page <inf2-performance>` lists expected inference performance for commonly used Large Language Models.
Introduction
------------
The `Transformers Neuron repository <https://github.com/aws-neuron/transformers-neuronx>`_
contains the source code of the AWS Neuron Transformers integration project.
As it stands now, it mainly serves the purpose of
running transformer decoder inference (autoregressive sampling)
workflows on the Neuron platform.
Note: This project is **actively** in development. The Neuron team is
still heavily modifying the Neuron optimized module classes. The
functionality provided in this repository will not maintain long-term
API stability until version >= 1.0.0. For applications willing to reuse
code from this repository, we recommend treating the Neuron optimized
module implementations as samples, and pinning the version of the main
library package ``torch-neuronx`` to avoid breaking interface changes as
new features are developed.
Checkpoint compatibility with HuggingFace Transformers
------------------------------------------------------
``transformers-neuronx`` is checkpoint-compatible with HuggingFace
Transformers. While the Neuron team reimplemented some HuggingFace
Transformers models from scratch for the purpose of maximizing the
execution efficiency of transformer decoders on Neuron, the
implementations are done with maximizing compatibility in mind, meaning
one can train transformer decoder models, say GPT2, using the standard
HuggingFace Transformers library, and then construct an
inference-optimized decoder model using transformers-neuronx's
``GPT2ForSampling`` class. If training was done with other libraries
such as MegatronLM, then it is still possible to convert the obtained
checkpoint to the standard HuggingFace Transformers checkpoint format,
and then move on to transformers-neuronx's optimized decoder
implementations.
Neuron optimized transformer decoders implemented in XLA High Level Operations (HLO)
------------------------------------------------------------------------------------
Due to the stateful nature of the autoregressive sampling computation,
an efficient implementation of autoregressive sampling using the Neuron
SDK requires rewriting the model forward function into a pure-function
computation running on fixed-shape tensors. Furthermore, we want the
pure-function computation to be implemented in a compiled language so that
the Neuron compiler can perform extensive code analysis and
optimization. We chose XLA High Level Operations (HLO) as the compiled
language for implementing Neuron optimized transformer decoder classes.
The source code of these classes contains Python functions written in a
syntax called "PyHLO", name of a Neuron internal tool for
writing/compiling the HLO language in Python. As an example, a "language
model head" implemented in PyHLO may look like the following.
::

    class LmHeadHlo:

        ...

        def lm_head(self, scribe):
            dtype = self.dtype
            hidden_size = self.hidden_size
            n_active_tokens = self.n_active_tokens
            batch_size = self.batch_size
            vocab_size = self.vocab_size
            hidden = dtype[hidden_size, n_active_tokens, batch_size].Parameter(parameter_number=0)
            weight = dtype[hidden_size, vocab_size].Parameter(parameter_number=1)
            rhs_size = n_active_tokens * batch_size
            hidden = dtype[hidden_size, rhs_size].Reshape(hidden)
            dot_dims = dict(lhs_contracting_dimensions=[0], rhs_contracting_dimensions=[0])
            logits = dtype[vocab_size, rhs_size].Dot(weight, hidden, dot_dimension_numbers=dot_dims)
            return dtype[vocab_size, n_active_tokens, batch_size].Reshape(logits)

        ...
The ``transformers_neuronx.compiler.compile_py_func`` function can
convert the Python ``lm_head`` function into ``HloModuleProto``, a valid
input format for the ``neuronx-cc`` compiler.
Tensor-parallelism support
--------------------------
For transformer decoders used in large language models,
tensor-parallelism is necessary as it provides a way to shard the
models' large weight matrices onto multiple NeuronCores and to have
the NeuronCores work on the same matrix multiply operation
collaboratively. transformers-neuronx's tensor-parallelism support makes
heavy use of collective operations such as all-reduce, which is
supported natively by the Neuron runtime.
There are some principles for setting tensor-parallelism degree (number
of NeuronCores participating in sharded matrix multiply operations) for
Neuron-optimized transformer decoder models.
1. The number of attention heads needs to be divisible by the
tensor-parallelism degree.
2. The total data size of model weights and key-value caches needs to be
smaller than 16 GB times the tensor-parallelism degree.
3. Currently, the Neuron runtime supports tensor-parallelism degrees 1,
2, 8, and 32 on Trn1 and supports tensor-parallelism degrees 1, 2, 4,
8, and 24 on Inf2.
Some examples:
1. ``facebook/opt-13b`` has 40 attention heads, and when running at
batch size 1 and float16 precision the model requires ~29 GB memory,
therefore a ``trn1.2xlarge`` with 32 GB device memory is sufficient.
2. ``facebook/opt-30b`` has 56 attention heads, and at batch size 1 and
float16 precision the model requires ~66 GB memory, therefore it can
run on 8 NeuronCores on one ``trn1.32xlarge`` using 128 GB device
memory.
3. ``gpt2-xl`` has 25 attention heads and requires ~4 GB of memory at
bfloat16 precision. Because 25 is not divisible by any supported tensor-parallelism
degree greater than 1, it can only run without tensor parallelism. (The sketch
below shows one way to check these constraints programmatically.)
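As a rough illustration of the first two principles, a minimal helper (hypothetical, not part of ``transformers-neuronx``) that filters the runtime-supported degrees for a given model might look like:
.. code-block:: python

   def valid_tp_degrees(num_attention_heads, model_plus_kv_cache_gib, instance='trn1'):
       """Return tensor-parallelism degrees that satisfy the sizing principles above."""
       supported = {'trn1': [1, 2, 8, 32], 'inf2': [1, 2, 4, 8, 24]}[instance]
       return [tp for tp in supported
               if num_attention_heads % tp == 0             # principle 1: heads divisible by degree
               and model_plus_kv_cache_gib < 16 * tp]       # principle 2: fits in 16 GB per NeuronCore times degree

   # facebook/opt-13b: 40 attention heads, ~29 GB at float16 and batch size 1
   print(valid_tp_degrees(40, 29, 'trn1'))  # -> [2, 8]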
Features
--------
-----------------------------------
Hugging Face generate() API support
-----------------------------------
Transformers Neuron models support the Hugging Face `generate() <https://huggingface.co/docs/transformers/v4.28.1/en/main_classes/text_generation#transformers.GenerationMixin.generate>`__
API via the ``HuggingFaceGenerationModelAdapter`` class. In the following example we
demonstrate how to run sampling with temperature using the ``GPT2`` model:
.. code-block:: python
from transformers_neuronx.gpt2.model import GPT2ForSampling
from transformers_neuronx.generation_utils import HuggingFaceGenerationModelAdapter
from transformers_neuronx.module import save_pretrained_split
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load and save the CPU model
model_cpu = AutoModelForCausalLM.from_pretrained('gpt2')
save_pretrained_split(model_cpu, 'gpt2-split')
# Create and compile the Neuron model
model_neuron = GPT2ForSampling.from_pretrained('gpt2-split', batch_size=1, tp_degree=2, n_positions=256, amp='f32', unroll=None)
model_neuron.to_neuron()
# Use the `HuggingFaceGenerationModelAdapter` to access the generate API
model = HuggingFaceGenerationModelAdapter(model_cpu.config, model_neuron)
# Get a tokenizer and example input
tokenizer = AutoTokenizer.from_pretrained('gpt2')
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.padding_side = 'left'
text = "Hello, I'm a language model,"
encoded_input = tokenizer(text, return_tensors='pt', padding=True)
# Run inference using temperature
model.reset_generation()
sample_output = model.generate(
input_ids=encoded_input.input_ids,
attention_mask=encoded_input.attention_mask,
do_sample=True,
max_length=256,
temperature=0.7,
)
print([tokenizer.decode(tok) for tok in sample_output])
Note: Because the Hugging Face generation API can expand the input's batch dimension
depending on the generation configuration, the Neuron model may need to be compiled
with a batch size that differs from the runtime batch size (the batch dimension of the
inputs passed to the generation API). The rules are summarized below and illustrated in the sketch that follows:
- if ``do_sample=True``, ``compile_batch_size = runtime_batch_size x num_return_sequences x beam_size``
- otherwise, ``compile_batch_size = runtime_batch_size x num_return_sequences``
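A small illustration of these rules (plain arithmetic, not a library call):
.. code-block:: python

   def compile_batch_size(runtime_batch_size, num_return_sequences=1, beam_size=1, do_sample=False):
       """Batch size to compile the Neuron model with, per the rules above."""
       if do_sample:
           return runtime_batch_size * num_return_sequences * beam_size
       return runtime_batch_size * num_return_sequences

   # e.g. a runtime batch of 1 with 3 sampled return sequences -> compile with batch_size=3
   print(compile_batch_size(1, num_return_sequences=3, do_sample=True))  # 3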
------------------------
Neuron Persistent Cache
------------------------
The Neuron Persistent Cache is now enabled for Transformers Neuron by default.
Model artifacts which have been compiled once will be cached and reused on
successive runs when possible. Model artifacts will only be reused when
compiling with the same compiler version (neuronx-cc), model configurations,
and compiler flags. It also includes other features (e.g., using an S3 bucket as
the cache backend). For more detailed information, see the
:ref:`Persistent cache documentation <neuron-caching>`.
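As a sketch of the S3-backed cache, assuming the cache location is controlled through the ``NEURON_COMPILE_CACHE_URL`` environment variable described in the persistent cache documentation (verify the variable name and URL format against that page; the bucket name is a placeholder):
.. code-block:: python

   import os

   # Point the persistent cache at an S3 bucket; set this before the model is compiled.
   # A local directory path works the same way.
   os.environ['NEURON_COMPILE_CACHE_URL'] = 's3://my-neuron-cache-bucket/transformers-neuronx'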
.. _int8_weight_storage_support:
---------------------------
int8 weight storage support
---------------------------
Transformers Neuron supports int8 weight storage for the ``GPT2`` model class.
int8 weight storage can be used to reduce memory bandwidth usage to improve
model performance. int8 weight storage support for additional model classes
will be added in an upcoming release. In the following example we demonstrate
how to apply int8 weight storage to the ``GPT2`` model via the
``QuantizationConfig`` and ``NeuronConfig`` configs:
.. code-block:: python
import torch
from transformers_neuronx.gpt2.model import GPT2ForSampling
from transformers_neuronx.module import save_pretrained_split
from transformers_neuronx.config import NeuronConfig, QuantizationConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
# Cast attention and mlp layers to low precisions only; layernorms stay as f32
def amp_callback(model, dtype):
    for block in model.transformer.h:
        block.attn.to(dtype)
        block.mlp.to(dtype)
    model.lm_head.to(dtype)
# Load and save the CPU model with bfloat16 casting
model_cpu = AutoModelForCausalLM.from_pretrained('gpt2')
amp_callback(model_cpu, torch.bfloat16)
save_pretrained_split(model_cpu, 'gpt2-split')
# Set the weight storage config to use int8 quantization and bf16 dequantization
neuron_config = NeuronConfig(
    quant=QuantizationConfig(quant_dtype='s8', dequant_dtype='bf16'),
)
# Create and compile the Neuron model
model_neuron = GPT2ForSampling.from_pretrained('gpt2-split', batch_size=1, tp_degree=2, n_positions=256, amp='bf16', neuron_config=neuron_config)
model_neuron.to_neuron()
# Get a tokenizer and example input
tokenizer = AutoTokenizer.from_pretrained('gpt2')
text = "Hello, I'm a language model,"
encoded_input = tokenizer(text, return_tensors='pt')
# Run inference
with torch.inference_mode():
    generated_sequence = model_neuron.sample(encoded_input.input_ids, sequence_length=256, start_ids=None)
print([tokenizer.decode(tok) for tok in generated_sequence])
--------------------------------------
Parallel Input Prompt Context Encoding
--------------------------------------
Transformers Neuron supports parallel input prompt context encoding for the ``GPT2``
model class. Parallel context encoding can be used to significantly reduce
the latency of the input prompt context encoding before the autoregressive
decoder token generation loop. Parallel context encoding support for additional
model classes will be added in an upcoming release.
The ``GPT2ForSamplingWithContextBroadcasting`` class has a ``context_length_estimate``
variable that determines the number of input prompt tokens that will be processed in
parallel. For optimal results, this should be set to a power of 2 that is
closest to the most frequently seen input prompt length.
In the following example we demonstrate how to apply parallel context encoding
to the ``GPT2`` model via the ``GPT2ForSamplingWithContextBroadcasting`` class.
In this example, we set the ``context_length_estimate`` to be 128, which is
the closest power of 2 to the length of the input prompt (97 tokens).
.. code-block:: python
import math
import torch
from transformers_neuronx.gpt2.model import GPT2ForSamplingWithContextBroadcasting
from transformers_neuronx.module import save_pretrained_split
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load and save the CPU model with bfloat16 casting
model_cpu = AutoModelForCausalLM.from_pretrained('gpt2')
save_pretrained_split(model_cpu, 'gpt2-split')
# Get a tokenizer and example input
tokenizer = AutoTokenizer.from_pretrained('gpt2')
text = "Hello, I'm a generative AI language model. Generative AI is a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music. It is powered by large models that are pre-trained on vast amounts of data and commonly referred to as foundation models (FMs). With generative AI on AWS, you can reinvent your applications, create entirely new customer experiences, drive unprecedented levels of productivity, and transform your business. "
encoded_input = tokenizer(text, return_tensors='pt')
# Set the number of tokens that will be processed in parallel
prompt_len = encoded_input.input_ids.shape[1]
context_length_estimate = int(2 ** math.ceil(math.log(prompt_len, 2))) # Use the closest power of two bucket size
# Create and compile the Neuron model
model_neuron = GPT2ForSamplingWithContextBroadcasting.from_pretrained('gpt2-split', batch_size=1, tp_degree=2, n_positions=256, amp='bf16', context_length_estimate=context_length_estimate)
model_neuron.to_neuron()
# Run inference
with torch.inference_mode():
    generated_sequence = model_neuron.sample(encoded_input.input_ids, sequence_length=256, start_ids=None)
print([tokenizer.decode(tok) for tok in generated_sequence])
The ``GPT2ForSamplingWithContextBroadcasting`` class can also process
an input prompt that has a different batch size from the batch size of the
autoregressive decoder output. For example, an input prompt with batch size = 1 can
be used to produce an output of batch size = 5 to generate multiple suggestions
for the same input prompt. The input prompt batch size can be specified using
the ``prompt_batch_size`` argument and the autoregressive decoder output batch
size can be specified using the ``batch_size`` argument. In the following example
we demonstrate how to apply parallel context encoding to the ``GPT2`` model
to generate 5 outputs for a single input.
.. code-block:: python
import math
import torch
from transformers_neuronx.gpt2.model import GPT2ForSamplingWithContextBroadcasting
from transformers_neuronx.module import save_pretrained_split
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load and save the CPU model with bfloat16 casting
model_cpu = AutoModelForCausalLM.from_pretrained('gpt2')
save_pretrained_split(model_cpu, 'gpt2-split')
# Get a tokenizer and example input
tokenizer = AutoTokenizer.from_pretrained('gpt2')
text = "Hello, I'm a generative AI language model. Generative AI is a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music. It is powered by large models that are pre-trained on vast amounts of data and commonly referred to as foundation models (FMs). With generative AI on AWS, you can reinvent your applications, create entirely new customer experiences, drive unprecedented levels of productivity, and transform your business. "
encoded_input = tokenizer(text, return_tensors='pt')
# Set the number of tokens that will be processed in parallel
prompt_len = encoded_input.input_ids.shape[1]
context_length_estimate = int(2 ** math.ceil(math.log(prompt_len, 2))) # Use the closest power of two bucket size
# Create and compile the Neuron model
model_neuron = GPT2ForSamplingWithContextBroadcasting.from_pretrained('gpt2-split', prompt_batch_size=1, batch_size=5, tp_degree=2, n_positions=256, amp='bf16', context_length_estimate=context_length_estimate)
model_neuron.to_neuron()
# Run inference
with torch.inference_mode():
    generated_sequence = model_neuron.sample(encoded_input.input_ids, sequence_length=256, start_ids=None)
for i, output in enumerate(generated_sequence):
    print('-' * 50)
    print(f'Batch {i} output:')
    print(tokenizer.decode(output))
------------------------------------
[Experimental] Serialization support
------------------------------------
Transformers Neuron supports model serialization (model saving and loading) for
the ``GPT2`` model class. Serialization support for additional model classes
will be added in an upcoming release. In the following example we demonstrate
how to save and load the ``GPT2`` model:
.. code-block:: python
import torch
from transformers_neuronx.gpt2.model import GPT2ForSampling
from transformers_neuronx.generation_utils import HuggingFaceGenerationModelAdapter
from transformers_neuronx.module import save_pretrained_split
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load and save the CPU model
model_cpu = AutoModelForCausalLM.from_pretrained('gpt2')
save_pretrained_split(model_cpu, 'gpt2-split')
# Create and compile the Neuron model
model_neuron = GPT2ForSampling.from_pretrained('gpt2-split', batch_size=1, tp_degree=2, n_positions=256, amp='f32', unroll=None)
model_neuron.to_neuron()
# Save the compiled Neuron model
model_neuron._save_compiled_artifacts('gpt2-neuron')
# Load the Neuron model
model_neuron = GPT2ForSampling.from_pretrained('gpt2-split', batch_size=1, tp_degree=2, n_positions=256, amp='f32', unroll=None)
model_neuron._load_compiled_artifacts('gpt2-neuron') # Load the compiled Neuron artifacts
model_neuron.to_neuron() # Load the model weights but skip compilation
# Get a tokenizer and example input
tokenizer = AutoTokenizer.from_pretrained('gpt2')
text = "Hello, I'm a language model,"
encoded_input = tokenizer(text, return_tensors='pt')
# Run inference
with torch.inference_mode():
    generated_sequence = model_neuron.sample(encoded_input.input_ids, sequence_length=256, start_ids=None)
print([tokenizer.decode(tok) for tok in generated_sequence])
--------------------------------------
Running inference with multiple models
--------------------------------------
Multiple transformers-neuronx models can be loaded at the same time as long
as the total number of consumed NeuronCores is less than or equal to the total
number of NeuronCores on the instance. For example, three tp-degree=8 models can be
loaded and run in parallel on an inf2.48xlarge which has 24 NeuronCores. The
``NEURON_RT_NUM_CORES`` and ``NEURON_RT_VISIBLE_CORES`` environment variables
can be used to allocate the necessary number of NeuronCores to each process
to run multiple transformers-neuronx models in parallel. See the
:ref:`torch_neuronx_core_placement_guide` section for additional information
about how to use these environment variables.
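A minimal sketch of this pattern, assuming separate processes that each load one tp_degree=8 model on a non-overlapping range of NeuronCores (the core ranges and script names are illustrative):
.. code:: ipython3

   # Three tp_degree=8 models on an inf2.48xlarge (24 NeuronCores), one per process
   NEURON_RT_VISIBLE_CORES=0-7   python run_model_a.py &
   NEURON_RT_VISIBLE_CORES=8-15  python run_model_b.py &
   NEURON_RT_VISIBLE_CORES=16-23 python run_model_c.py &
   wait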
```
|
|
2023-09-29T20:54:56.435Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/api-reference-guide.rst.txt
|
```
.. _neuronx_distributed_api_guide:
API Reference Guide (``neuronx-distributed``)
==============================================
.. toctree::
:maxdepth: 1
:hidden:
/libraries/neuronx-distributed/api_guide
.. include:: /libraries/neuronx-distributed/api-reference-guide.txt
```
|
|
2023-09-29T20:54:56.444Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/tutorials/training-gpt-neox.rst.txt
|
```
.. _gpt_neox_tp_zero1_tutorial:
Training GPT-NeoX 6.9B with Tensor Parallelism and ZeRO-1 Optimizer (``neuronx-distributed``)
==============================================================================================
In this section, we showcase how to pretrain a GPT-NeoX 6.9B model using tensor parallelism
and the ZeRO-1 optimizer in the ``neuronx-distributed`` package. Please refer to the `Neuron Samples repository <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/tp_dp_gpt_neox_6.9b_hf_pretrain>`__ to view the files used in this tutorial.
**Setting up environment:**
For this experiment, we will use a ParallelCluster with at least four trn1-32xl compute nodes.
`Train your model on ParallelCluster <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/devflows/training/parallelcluster/parallelcluster-training.html>`__
introduces how to set up and use a ParallelCluster.
We first need to create and activate a Python virtual env on the head node of the ParallelCluster.
Next follow the instructions mentioned here:
:ref:`Install PyTorch Neuron on Trn1 <setup-torch-neuronx>` to install neuron python packages.
We also need to install the ``neuronx-distributed`` package using the following command:
.. code:: ipython3
python -m pip install neuronx_distributed --extra-index-url https://pip.repos.neuron.amazonaws.com
Let’s download the scripts for pretraining.
.. code:: ipython3
mkdir -p ~/examples/tp_dp_gpt_neox_hf_pretrain
cd ~/examples/tp_dp_gpt_neox_hf_pretrain
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/tp_dp_gpt_neox_6.9b_hf_pretrain/tp_dp_gpt_neox_6.9b_hf_pretrain.py
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/tp_dp_gpt_neox_6.9b_hf_pretrain/tp_dp_gpt_neox_6.9b_hf_pretrain.sh
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/common/adamw_fp32_optim_params.py
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/common/get_dataset.py
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/common/requirements.txt
python3 -m pip install -r requirements.txt
Next let’s download and pre-process the dataset:
.. code:: ipython3
cd ~/examples/tp_dp_gpt_neox_hf_pretrain
python3 get_dataset.py
At this point, you are all set to start training.
**Running training**
We first pre-compile the graphs using the ``neuron_parallel_compile``.
Suppose the cluster queue name is ``compute1-dy-training-0`` and we are using nodes 1-4;
let’s run the command below:
.. code:: ipython3
sbatch --exclusive \
--nodelist=compute1-dy-training-0-[1-4] \
--wrap="srun neuron_parallel_compile bash $(pwd)/tp_dp_gpt_neox_6.9b_hf_pretrain.sh"
This script uses a tensor-parallel size of 8.
This will automatically set the zero-1 sharding degree to 16 (4 * 32 workers / tensor_parallel_size).
Once the graphs are compiled, we can run training and observe the loss go down.
To run the training, we use the same command as above, but without ``neuron_parallel_compile``.
.. code:: ipython3
sbatch --exclusive \
--nodelist=compute1-dy-training-0-[1-4] \
--wrap="srun bash $(pwd)/tp_dp_gpt_neox_6.9b_hf_pretrain.sh"
**ZeRO-1 Optimizer**
The training script uses the ZeRO-1 optimizer, where the optimizer states are partitioned across
the ranks so that each rank updates only its own partition.
Below is the code snippet that sets up the ZeRO-1 optimizer in the training script:
.. code:: ipython3
from neuronx_distributed.optimizer import NeuronZero1Optimizer
optimizer = NeuronZero1Optimizer(
optimizer_grouped_parameters,
AdamW_FP32OptimParams,
lr=flags.lr,
pin_layout=False,
sharding_groups=parallel_state.get_data_parallel_group(as_list=True),
grad_norm_groups=parallel_state.get_tensor_model_parallel_group(as_list=True),
)
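For reference, below is a minimal, illustrative sketch of how such an optimizer is typically driven inside the training loop (the real loop lives in ``tp_dp_gpt_neox_6.9b_hf_pretrain.py`` downloaded above; the variable names here are placeholders):
.. code:: ipython3
# Illustrative training step, not the exact script contents.
loss = model(input_ids, attention_mask=attention_mask, labels=labels).loss
loss.backward()
optimizer.step()       # each rank reduces gradients and updates only its own partition
optimizer.zero_grad()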
```
|
|
2023-09-29T20:54:56.479Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/neuronx-distributed-misc.rst.txt
|
```
Misc (``neuronx-distributed``)
===============================
.. toctree::
:maxdepth: 1
:hidden:
/release-notes/neuronx-distributed/neuronx-distributed
.. include:: /libraries/neuronx-distributed/neuronx-distributed-misc.txt
```
|
|
2023-09-29T20:54:56.511Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/app_notes.rst.txt
|
```
.. _neuronx_distributed_appnotes:
App Notes (``neuronx-distributed``)
====================================
.. toctree::
:maxdepth: 1
:hidden:
/libraries/neuronx-distributed/tensor_parallelism_overview
.. include:: /libraries/neuronx-distributed/app_notes.txt
```
|
|
2023-09-29T20:54:56.533Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/neuronx-distributed/neuronx-distributed.rst.txt
|
```
.. _neuronx-distributed-rn:
Neuron Distributed Release Notes (``neuronx-distributed``)
==========================================================
.. contents:: Table of contents
:local:
:depth: 1
This document lists the release notes for the Neuronx-Distributed library.
Neuron Distributed [0.4.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 9/15/2023
New in this release
-------------------
* Added API for padding attention heads when they are not divisible by tensor-parallel degree
* Added a constant threadpool for distributed inference
* Fixed a bug with padding_idx in ParallelEmbedding layer
* Fixed an issue with checkpoint loading to take into account the stride parameter in tensor parallel layers
Known Issues and Limitations
----------------------------
* Currently the model checkpointing saves a sharded checkpoint, and users have to write a script to combine the shards.
Neuron Distributed [0.3.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 8/28/2023
New in this release
-------------------
* Added Zero1 Optimizer support that works with tensor-parallelism
* Added support for sequence-parallel that works with tensor-parallelism
* Added IO aliasing feature in the parallel_trace API, which allows marking certain tensors as state tensors
* Fixed hangs when tracing models using parallel_trace for higher TP degree
Known Issues and Limitations
----------------------------
* Currently the model checkpointing saves a sharded checkpoint, and users have to write a script to combine the shards.
Neuron Distributed [0.2.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 7/19/2023
New in this release
-------------------
* Added parallel cross entropy loss function.
Known Issues and Limitations
----------------------------
* Currently the model checkpointing saves a sharded checkpoint, and users have to write a script to combine the shards.
Date: 6/14/2023
New in this release
-------------------
* Releasing the Neuron Distributed (``neuronx-distributed``) library for enabling large language model training/inference.
* Added support for tensor-parallelism training/inference.
Known Issues and Limitations
----------------------------
* Currently the model checkpointing saves a sharded checkpoint, and users have to write a script to combine the shards.
```
|
|
2023-09-29T20:54:56.556Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.ipynb.txt
|
```
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# T5 inference with Tensor Parallelism"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is an extension to the [t5 inference tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html). Here we will use NeuronxDistributed to improve the inference performance using tensor parallelism.\n",
"\n",
"This tutorial has the following main sections:\n",
"\n",
"1. Install dependencies\n",
"1. Plug in `NeuronxDistributed` layers into T5\n",
"1. Compile the T5 model\n",
"1. Run distributed infernece with beam search \n",
"\n",
"This Jupyter notebook should be run on a Inf2 instance (`inf2.24xlarge`) or Trn1 isntance (`trn1.32xlarge`)\n",
"\n",
"\n",
"> Do note that flan-t5 models do not work with the code in this tutorial. We are working on fixing that. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Install dependencies\n",
"\n",
"The code in this tutorial is written for Jupyter Notebooks. To use Jupyter Notebook on the Neuron instance, you\n",
"can use this [guide](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/notebook/setup-jupyter-notebook-steps-troubleshooting.html).\n",
"\n",
"It is recommended to go through the [t5 inference tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html) before you start this tutorial. \n",
"In addition to the dependencies in the [t5 inference tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html), we need to install neuronx-distributed. \n",
"\n",
"This tutorial requires the following pip packages:\n",
"\n",
"- `torch-neuronx`\n",
"- `neuronx-cc`\n",
"- `transformers`\n",
"- `optimum-neuron`\n",
"- `neuronx-distributed`\n",
"\n",
"Most of these packages will be installed when configuring your environment using the Trn1/Inf2 setup guide. The additional dependencies must be installed here:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"! pip install --upgrade transformers==4.31.0 optimum-neuron==0.0.8 neuronx_distributed --extra-index-url https://pip.repos.neuron.amazonaws.com"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Plug in NeuronxDistributed layers into T5\n",
"\n",
"We extend the huggingface's T5 model to use the `NeuronxDistributed` parallel layers. To do so, we simply swap linear layers in `T5LayerSelfAttention`, `T5LayerCrossAttention`, and `T5LayerFF` definitions with `ColumnParallelLinear` and `RowParallelLinear`. We also need to swap the `Embedding` layer with `ParallelEmbedding`.\n",
"\n",
"Let us take the example of T5Attention. The [attention block](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/modeling_t5.py#L363-L366) has q, k, v, and o linear layers. \n",
"The multi-head attention block uses q, k and v to compute the attention scores. The attention scores are then passed through o to compute the attention block output. \n",
"So let us swap q, k and v layers with `ColumnParallelLinear` and o with `RowParallelLinear`. Having `RowParallelLinear` following a `ColumnParallelLinear` is a performance optimization. The attention scores computed with q, k and v are already split across Neuron devices. The row parallel layer can use this shared output directly. \n",
"The embedding layer is simply swapped with the `ParallelEmbedding`.\n",
"\n",
"```\n",
"class ParallelAttention(T5Attention):\n",
" def __init__(self, config: T5Config, has_relative_attention_bias=False):\n",
" super().__init__(config, has_relative_attention_bias)\n",
" # Per attention head and per partition values\n",
" world_size = parallel_state.get_tensor_model_parallel_size()\n",
" self.num_attention_heads_per_partition = divide(self.n_heads, world_size)\n",
" self.hidden_size_per_partition = self.num_attention_heads_per_partition * self.key_value_proj_dim\n",
"\n",
" # Mesh TensorFlow initialization to avoid scaling before softmax\n",
" self.q = ColumnParallelLinear(self.d_model,\n",
" self.inner_dim,\n",
" bias=False,\n",
" gather_output=False)\n",
" self.k = ColumnParallelLinear(self.d_model,\n",
" self.inner_dim,\n",
" bias=False,\n",
" gather_output=False)\n",
" self.v = ColumnParallelLinear(self.d_model,\n",
" self.inner_dim,\n",
" bias=False,\n",
" gather_output=False)\n",
" self.o = RowParallelLinear(self.inner_dim,\n",
" self.d_model,\n",
" bias=False,\n",
" input_is_parallel=True)\n",
"\n",
" if self.has_relative_attention_bias:\n",
" self.relative_attention_bias = ParallelEmbedding(self.relative_attention_num_buckets, self.n_heads)\n",
" self.n_heads = self.num_attention_heads_per_partition\n",
"...\n",
"```\n",
"\n",
"You can find the all modified T5 layers defined in [t5_model_layers.py](https://github.com/aws-neuron/aws-neuron-sdk/tree/master/src/examples/pytorch/neuronx_distributed/t5-inference/t5_model_layers.py). \n",
"\n",
"\n",
"Once we have the modified T5 layers, we can plug in the T5Attention and T5LayerFF into the pretrained model. Here is how you do that. \n",
"\n",
"```\n",
"def load_pretrained_with_parallel_attn(model_name):\n",
" \n",
" model = T5ForConditionalGeneration.from_pretrained(model_name, torch_dtype=\"auto\")\n",
"\n",
" # Parallel implementation of Attention modules.\n",
" from t5_model_layers import ParallelSelfAttention, ParallelFF, ParallelCrossAttention\n",
"\n",
" for index, block in enumerate(model.decoder.block):\n",
" if index == 0:\n",
" block.layer[0] = ParallelSelfAttention(model.config,\n",
" has_relative_attention_bias=True)\n",
" else:\n",
" block.layer[0] = ParallelSelfAttention(model.config)\n",
" block.layer[1] = ParallelCrossAttention(model.config)\n",
" block.layer[2] = ParallelFF(model.config)\n",
" # Load the weights into the parallel layers \n",
" neuronx_distributed.parallel_layers.load(model_name + \".pt\", model, sharded=False)\n",
"\n",
" return model\n",
"\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Compile the parallel T5 model\n",
"\n",
"Let us set some model parameters"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model_name = \"t5-3b\"\n",
"max_length = 128\n",
"num_beams = 4\n",
"tp_degree = 8 # tensor parallelism degree"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Download and save the model that we want to trace. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from transformers import T5ForConditionalGeneration\n",
"\n",
"model = T5ForConditionalGeneration.from_pretrained(model_name, torch_dtype=\"auto\")\n",
"torch.save({\"model\":model.state_dict()}, model_name + \".pt\")\n",
"model.config.use_cache = True"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To run HuggingFace T5 models on Neuron, we need to make a couple of changes. Let us reuse the code from the [t5 inference tutorial](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html) which makes T5 compatible with Neuron. For your convenience, the code copied into [wrapper.py](https://github.com/aws-neuron/aws-neuron-sdk/tree/master/src/examples/pytorch/neuronx_distributed/t5-inference/wrapper.py) and [t5_models.py](https://github.com/aws-neuron/aws-neuron-sdk/tree/master/src/examples/pytorch/neuronx_distributed/t5-inference/t5_models.py). This notebook will import these files. \n",
"\n",
"The only change made to this code is that we use `neuronx_distributed.trace` instead of `torch_neuronx.trace`. \n",
"\n",
"Let us trace the encoder and decoder. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import t5_models \n",
"import neuronx_distributed\n",
"import time \n",
"\n",
"# This can take up to 20 minutes\n",
"encoder_compile_start_time = time.time()\n",
"traced_encoder = t5_models.parallel_trace_encoder(model_name, max_length, num_beams, tp_degree)\n",
"print(\"Encoder compilation time {}\".format(time.time() - encoder_compile_start_time))\n",
"\n",
"neuronx_distributed.trace.parallel_model_save(traced_encoder, \"TracedParallelEncoder.pt\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# This can take up to 15 minutes\n",
"decoder_compile_start_time = time.time()\n",
"traced_decoder = t5_models.parallel_trace_decoder(model, model_name, num_beams, max_length, tp_degree)\n",
"print(\"Decoder compilation time {}\".format(time.time() - decoder_compile_start_time))\n",
"\n",
"neuronx_distributed.trace.parallel_model_save(traced_decoder, \"TracedParallelDecoder.pt\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Inference with the traced parallel T5 model\n",
"\n",
"With the traced model, let us try using beam search for inference."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Results:\n",
"1 Lassen Sie uns gutes Essen essen.\n",
"2 Lassen Sie uns gut essen.\n",
"3 Lassen Sie uns gutes Essen zu essen.\n",
"4 Lassen Sie uns gutes Essen zu sich nehmen.\n"
]
}
],
"source": [
"import neuronx_distributed\n",
"from wrapper import T5Wrapper\n",
"from transformers import T5Tokenizer\n",
"\n",
"\n",
"num_return_sequences = 4\n",
"\n",
"traced_encoder = neuronx_distributed.trace.parallel_model_load(\"TracedParallelEncoder.pt\")\n",
"traced_decoder = neuronx_distributed.trace.parallel_model_load(\"TracedParallelDecoder.pt\")\n",
"\n",
"tokenizer = T5Tokenizer.from_pretrained(model_name)\n",
"model = T5Wrapper.from_pretrained(model_name)\n",
"\n",
"model.encoder = traced_encoder\n",
"model.decoder = traced_decoder\n",
"setattr(model.encoder, 'main_input_name', 'input_ids') # Attribute required by beam search\n",
"\n",
"output = model.parallel_infer(tokenizer=tokenizer,\n",
" prompt=\"translate English to German: Lets eat good food.\",\n",
" max_length=max_length,\n",
" num_beams=num_beams,\n",
" num_return_sequences=num_return_sequences,\n",
" device=\"xla\")\n",
"\n",
"results = [tokenizer.decode(t, skip_special_tokens=True) for t in output]\n",
"\n",
"print('Results:')\n",
"for i, summary in enumerate(results):\n",
" print(i + 1, summary)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Benchmarking\n",
"\n",
"Let us benchmark the per token decoder latency"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Let us install NeuronPerf. We will use it to measure the performance.\n",
"! pip install neuronperf --extra-index-url=https://pip.repos.neuron.amazonaws.com"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os \n",
"import neuronperf as npf\n",
"\n",
"d_model = model.config.d_model\n",
"model_dir = \"TracedParallelDecoder.pt\"\n",
"decoder_run_count = 128\n",
"\n",
"def load_fn(model_path, **kwargs):\n",
" return neuronx_distributed.trace.parallel_model_load(model_path)\n",
" \n",
"# NeuronPerf can't see tp_degree at the moment, so just expose all cores\n",
"def env_setup_fn(*_):\n",
" del os.environ[\"NEURON_RT_VISIBLE_CORES\"]\n",
"\n",
"def benchmark():\n",
"\n",
" # Create some sample inputs for the decoder\n",
" decoder_input_ids = torch.ones((num_beams, 1), dtype=torch.int64)\n",
" decoder_attention_mask = torch.ones((num_beams, max_length), dtype=torch.int32)\n",
" encoder_attention_mask = torch.ones((num_beams, max_length), dtype=torch.int64)\n",
" encoder_hidden_states = torch.ones((num_beams, max_length, d_model), dtype=torch.float32)\n",
" beam_idx = torch.arange(0, num_beams, dtype=torch.int64)\n",
" beam_scores = torch.zeros((num_beams,), dtype=torch.float)\n",
"\n",
" inputs = (decoder_input_ids,\n",
" decoder_attention_mask,\n",
" encoder_hidden_states,\n",
" encoder_attention_mask,\n",
" beam_idx,\n",
" beam_scores)\n",
"\n",
" reports = npf.benchmark(\n",
" load_fn,\n",
" model_dir,\n",
" [inputs], \n",
" batch_sizes=1,\n",
" n_models=1,\n",
" max_infers=decoder_run_count,\n",
" workers_per_model=1, # no bottleneck on model inputs, so 1 is fine\n",
" env_setup_fn=env_setup_fn,\n",
" multiprocess=False,\n",
" )\n",
" \n",
" report = reports[0]\n",
"\n",
" # let's update throughput to be tokens / second and add a new recor\n",
" latency_in_s = report[\"latency_ms_avg\"] / 1000\n",
" tokens_per_s = decoder_run_count / latency_in_s\n",
" report[\"throughput_avg\"] = tokens_per_s\n",
" \n",
" # display and save results\n",
" npf.print_reports(reports, cols=[\"throughput_avg\", \"latency_ms_p50\", \"latency_ms_p99\"])\n",
" print(f\"Results saved to: {npf.write_json(reports[0])}\")\n",
"\n",
"benchmark()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now lets benchmark inference as a whole including sampling. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import torch\n",
"import neuronx_distributed\n",
"import neuronperf as npf\n",
"\n",
"from transformers import T5Tokenizer\n",
"from wrapper import T5Wrapper\n",
"\n",
"tokenizer = T5Tokenizer.from_pretrained(model_name)\n",
"\n",
"generated_token_count = 0\n",
"\n",
"class Wrapper(torch.nn.Module):\n",
" def __init__(self, \n",
" traced_encoder,\n",
" traced_decoder):\n",
" super().__init__()\n",
" self.model = T5Wrapper.from_pretrained(model_name)\n",
" self.model.encoder = traced_encoder\n",
" self.model.decoder = traced_decoder\n",
" setattr(self.model.encoder, 'main_input_name', 'input_ids') # Attribute required by beam search\n",
"\n",
" def forward(self, *inputs):\n",
" input_ids = inputs[0]['input_ids']\n",
" attention_mask = inputs[0]['attention_mask']\n",
" return self.model.parallel_infer(input_ids=input_ids,\n",
" attention_mask=attention_mask,\n",
" max_length=max_length,\n",
" num_beams=num_beams,\n",
" num_return_sequences=num_return_sequences)\n",
"\n",
"def load_fn(filename, **kwargs):\n",
" traced_encoder = neuronx_distributed.trace.parallel_model_load(filename + \"TracedParallelEncoder.pt\")\n",
" traced_decoder = neuronx_distributed.trace.parallel_model_load(filename + \"TracedParallelDecoder.pt\")\n",
" return Wrapper(traced_encoder, traced_decoder)\n",
"\n",
"# NeuronPerf can't see tp_degree at the moment, so just expose all cores\n",
"def env_setup_fn(*_):\n",
" del os.environ[\"NEURON_RT_VISIBLE_CORES\"]\n",
"\n",
"def preprocess_fn(inputs):\n",
" \n",
" encoding = []\n",
" for text in inputs:\n",
" batch_encoding = tokenizer(text, \n",
" max_length=max_length, \n",
" truncation=True, \n",
" padding='max_length',\n",
" return_tensors=\"pt\")\n",
" input_ids = batch_encoding['input_ids']\n",
" attention_mask = batch_encoding['attention_mask']\n",
" encoding.append({\"input_ids\": input_ids,\n",
" \"attention_mask\": attention_mask})\n",
" return encoding\n",
"\n",
"def postprocess_fn(outputs):\n",
" output = [tokenizer.decode(seq) for seq in outputs]\n",
" global generated_token_count \n",
" generated_token_count = len(outputs[0])\n",
" return output\n",
"\n",
"def benchmark():\n",
" inputs = [\"summarize: The Inflation Reduction Act lowers prescription drug costs, health care costs, and energy costs. It's the most aggressive action on tackling the climate crisis in American history, which will lift up American workers and create good-paying, union jobs across the country. It'll lower the deficit and ask the ultra-wealthy and corporations to pay their fair share. And no one making under $400,000 per year will pay a penny more in taxes.\"]\n",
" reports = npf.benchmark(\n",
" load_fn,\n",
" \"\", # Model dir\n",
" [inputs], \n",
" batch_sizes=1,\n",
" n_models=1,\n",
" max_infers=5,\n",
" max_duration=0, # sampling can take a while, so let's not timeout\n",
" workers_per_model=1, \n",
" env_setup_fn=env_setup_fn,\n",
" preprocess_fn=preprocess_fn,\n",
" postprocess_fn=postprocess_fn,\n",
" multiprocess=False,\n",
" )\n",
" \n",
" report = reports[0]\n",
"\n",
" report[\"throughput_avg\"] = round(generated_token_count / (report[\"latency_ms_avg\"] / 1000), 2)\n",
" report[\"latency_per_token_ms_p50\"] = round((report[\"latency_ms_p50\"])/generated_token_count, 2)\n",
" report[\"latency_per_token_ms_p99\"] = round((report[\"latency_ms_p99\"])/generated_token_count, 2)\n",
"\n",
" # display and save results\n",
" npf.print_reports(reports, cols=[\"throughput_avg\", \"latency_per_token_ms_p50\", \"latency_per_token_ms_p99\"])\n",
" print(f\"Results saved to: {npf.write_json(report)}\")\n",
"\n",
"benchmark()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "aws_neuron_venv_pytorch",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
```
|
|
2023-09-29T20:54:56.604Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-runtime/api-reference-guide.rst.txt
|
```
API Reference Guide
===================
.. toctree::
:maxdepth: 1
Runtime API </neuron-runtime/nrt-api-guide>
```
|
|
2023-09-29T20:54:56.715Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/tutorials/training.rst.txt
|
```
.. _tp_training_tutorial:
Training with Tensor Parallelism (``neuronx-distributed``)
===========================================================
Building on the changes described in the :ref:`Developer guide <tp_developer_guide>`, let’s now run an end-to-end training
with tensor-parallelism. This section is adopted from `BERT pretraining
tutorial <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/bert.html#hf-bert-pretraining-tutorial>`__
which used data-parallel training to scale the throughput. In this
section we modify that tutorial to showcase the use of
tensor-parallelism which should enable us to scale the size of the
model.
Setting up environment:
For this experiment, we will use a trn1-32xl machine with the storage
set to at least 512 GB.
Follow the instructions mentioned here:
:ref:`Install PyTorch Neuron on Trn1 <setup-torch-neuronx>`.
It is recommended to work out of a Python virtual env to avoid package installation issues.
We also have to install the ``neuronx-distributed`` package using the
following command:
.. code:: ipython3
python -m pip install neuronx_distributed --extra-index-url https://pip.repos.neuron.amazonaws.com
Make sure the transformers version is set to ``4.26.0``
Let’s download the scripts and datasets for pretraining.
.. code:: ipython3
mkdir -p ~/examples/tp_dp_bert_hf_pretrain
cd ~/examples/tp_dp_bert_hf_pretrain
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_bert_hf_pretrain/tp_dp_bert_large_hf_pretrain_hdf5.py
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_bert_hf_pretrain/requirements.txt
python3 -m pip install -r requirements.txt
Next let’s download the tokenizer and the sharded datasets:
.. code:: ipython3
mkdir -p ~/examples_datasets/
pushd ~/examples_datasets/
aws s3 cp s3://neuron-s3/training_datasets/bert_pretrain_wikicorpus_tokenized_hdf5/bert_pretrain_wikicorpus_tokenized_hdf5_seqlen128.tar . --no-sign-request
tar -xf bert_pretrain_wikicorpus_tokenized_hdf5_seqlen128.tar
rm bert_pretrain_wikicorpus_tokenized_hdf5_seqlen128.tar
popd
At this point, you are all set to start training
Running training
We first pre-compile the graphs using the ``neuron_parallel_compile``.
This process is similar to one discussed in the `BERT pretraining
tutorial <https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/bert.html#hf-bert-pretraining-tutorial>`__
. Let’s run the command below:
.. code:: ipython3
cd ~/examples/tp_dp_bert_hf_pretrain
neuron_parallel_compile XLA_DOWNCAST_BF16=1 torchrun --nproc_per_node=32 \
tp_dp_bert_large_hf_pretrain_hdf5.py \
--tensor_parallel_size 8 \
--steps_this_run 10 \
--batch_size 64 \
--grad_accum_usteps 64 |& tee compile_log.txt
This script uses a tensor-parallel size of 8. This will automatically
set the data-parallel degree to 4 (32 workers / tensor_parallel_size).
Once the graphs are compiled, we can run training and observe the
loss go down. To run the training, we use the same command as above, but without
``neuron_parallel_compile``.
.. code:: ipython3
XLA_DOWNCAST_BF16=1 torchrun --nproc_per_node=32 \
tp_dp_bert_large_hf_pretrain_hdf5.py \
--tensor_parallel_size 8 \
--steps_this_run 10 \
--batch_size 64 \
--grad_accum_usteps 64 |& tee training_log.txt
You will notice that the throughput is lower than with
``dp_bert_large_hf_pretrain_hdf5.py``. This is expected, as the number of
data-parallel workers has gone down (from 32 to 4). However, if you
open ``neuron-top`` in another terminal, you should see that the per-core memory
utilization for this script is lower than for
``dp_bert_large_hf_pretrain_hdf5.py``. Since the memory requirement has
gone down, you can scale the size of the model by increasing the
number of layers, attention heads, or hidden sizes (see the sketch below).
The loss curve should match the loss curve we would get from the
data-parallel counterpart.
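As an illustrative sketch of that scaling (the values below are hypothetical; the real model construction lives in ``tp_dp_bert_large_hf_pretrain_hdf5.py``), a larger model could be configured through the Hugging Face ``BertConfig``:
.. code:: ipython3
from transformers import BertConfig, BertForPreTraining
# Hypothetical upscaling of BERT-large once tensor parallelism has
# lowered the per-core memory footprint.
config = BertConfig.from_pretrained("bert-large-uncased")
config.num_hidden_layers = 32       # up from 24
config.num_attention_heads = 24     # up from 16
config.hidden_size = 1536           # must stay divisible by num_attention_heads
config.intermediate_size = 4 * config.hidden_size
model = BertForPreTraining(config)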
Known Issues:
~~~~~~~~~~~~~
1. Currently the checkpoints dumped during training are sharded, and
users would have to write a script to combine the checkpoints
themselves. This should be fixed in a future release.
```
|
|
2023-09-29T20:54:56.734Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/nemo-megatron/index.rst.txt
```
.. _nemo-megatron-index:
AWS Neuron Reference for NeMo Megatron
======================================
AWS Neuron Reference for NeMo Megatron is a library that includes modified versions of the open-source packages `NeMo <https://github.com/NVIDIA/NeMo>`_ and `Apex <https://github.com/NVIDIA/apex>`_ that have been adapted for use with AWS Neuron and AWS EC2 Trn1 instances.
The library supports Tensor Parallel, Pipeline Parallel and Data Parallel configurations for distributed training of large language models like GPT-3 175B. The APIs have been optimized for XLA-based computation and high-performance communication on Trainium instances.
The library uses various techniques to improve memory utilization, such as sequence parallelism, which reduces the activation memory footprint, and selective or full activation checkpointing, which allows larger model configurations to fit. SPMD optimizations are also used whenever possible to reduce the number of graphs obtained.
.. dropdown:: Setup (``neuronx-nemo-megatron``)
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
The library can be installed from the `neuronx-nemo-megatron GitHub repo <https://github.com/aws-neuron/neuronx-nemo-megatron>`_.
.. dropdown:: Tutorials (``neuronx-nemo-megatron``)
:class-title: sphinx-design-class-title-small
:class-body: sphinx-design-class-body-small
:animate: fade-in
* `Launch a GPT-3 pretraining job using neuronx-nemo-megatron <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples/blob/master/examples/jobs/neuronx-nemo-megatron-gpt-job.md>`_
* `Launch a Llama 2 pretraining job using neuronx-nemo-megatron <https://github.com/aws-neuron/aws-neuron-parallelcluster-samples/blob/master/examples/jobs/neuronx-nemo-megatron-llamav2-job.md>`_
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/tutorials/inference.rst.txt
```
.. _tp_inference_tutorial:
Inference with Tensor Parallelism (``neuronx-distributed``) [Experimental]
===========================================================================
Before we start, let's install transformers.
.. code:: ipython3
pip install transformers==4.26.0
To run model inference, we need to trace the distributed model. Before we run
inference, let’s get a checkpoint that we can use by running the block of code below:
.. code:: ipython3
import torch
import torch_neuronx
import transformers
from transformers import AutoTokenizer, AutoModelForSequenceClassification
name = "bert-base-cased-finetuned-mrpc"
model = AutoModelForSequenceClassification.from_pretrained(name, torchscript=True)
torch.save({"model":model.state_dict()}, "bert.pt")
If you already have a checkpoint from the tensor parallel training tutorial or by running
training from another source, feel free to skip the above step.
Once we have the checkpoint we are ready to trace the model and run
inference against it. Let’s look at the example below:
.. code:: ipython3
import os
import torch
import torch_neuronx
import transformers
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers.models.bert.modeling_bert import BertSelfAttention, BertSelfOutput
import neuronx_distributed
from neuronx_distributed.parallel_layers import layers, parallel_state
def encode(tokenizer, *inputs, max_length=128, batch_size=1):
tokens = tokenizer.encode_plus(
*inputs,
max_length=max_length,
padding='max_length',
truncation=True,
return_tensors="pt"
)
return (
torch.repeat_interleave(tokens['input_ids'], batch_size, 0),
torch.repeat_interleave(tokens['attention_mask'], batch_size, 0),
torch.repeat_interleave(tokens['token_type_ids'], batch_size, 0),
)
# Create the tokenizer and model
name = "bert-base-cased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(name)
# Set up some example inputs
sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"
paraphrase = encode(tokenizer, sequence_1, sequence_2)
not_paraphrase = encode(tokenizer, sequence_1, sequence_1)
def get_model():
model = AutoModelForSequenceClassification.from_pretrained(name, torchscript=True)
# Here we build a model with tensor-parallel layers.
# Note: If you already have a Model class that does this, we can use that directly
# and load the checkpoint in it.
class ParallelSelfAttention(BertSelfAttention):
def __init__(self, config, position_embedding_type=None):
super().__init__(config, position_embedding_type)
self.query = layers.ColumnParallelLinear(config.hidden_size, self.all_head_size, gather_output=False)
self.key = layers.ColumnParallelLinear(config.hidden_size, self.all_head_size, gather_output=False)
self.value = layers.ColumnParallelLinear(config.hidden_size, self.all_head_size, gather_output=False)
self.num_attention_heads = self.num_attention_heads // parallel_state.get_tensor_model_parallel_size()
self.all_head_size = self.all_head_size // parallel_state.get_tensor_model_parallel_size()
class ParallelSelfOutput(BertSelfOutput):
def __init__(self, config):
super().__init__(config)
self.dense = layers.RowParallelLinear(config.hidden_size,
config.hidden_size,
input_is_parallel=True)
for layer in model.bert.encoder.layer:
layer.attention.self = ParallelSelfAttention(model.config)
layer.attention.output = ParallelSelfOutput(model.config)
# Load the checkpoint we created above. We pass sharded=False, since the checkpoint
# we obtained is unsharded. If you are using a checkpoint from the tensor-parallel training,
# set sharded=True, as that checkpoint will contain shards from each tp rank.
neuronx_distributed.parallel_layers.load("bert.pt", model, sharded=False)
# These io aliases would enable us to mark certain input tensors as state tensors. These
# state tensors are going to be device tensors.
io_aliases = {}
return model, io_aliases
if __name__ == "__main__":
# Note how we are passing a function that returns a model object, which needs to be traced.
# This is mainly done, since the model initialization needs to happen within the processes
# that get launched internally within the parallel_model_trace.
model = neuronx_distributed.trace.parallel_model_trace(get_model, paraphrase, tp_degree=2)
# Once traced, we now save the traced model for future inference. This API takes care
# of saving the checkpoint from each tensor parallel worker
neuronx_distributed.trace.parallel_model_save(model, "tp_models")
# We now load the saved model and will run inference against it
model = neuronx_distributed.trace.parallel_model_load("tp_models")
cpu_model = AutoModelForSequenceClassification.from_pretrained(name, torchscript=True)
assert torch.argmax(model(*paraphrase)[0]) == torch.argmax(cpu_model(*paraphrase)[0])
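# Hedged extra check, not part of the original example: the same comparison can
# be repeated on the non-paraphrase inputs prepared above as a quick sanity test.
assert torch.argmax(model(*not_paraphrase)[0]) == torch.argmax(cpu_model(*not_paraphrase)[0])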
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.rst.txt
```
.. _gpt_neox_20b_tp_zero1_tutorial:
Training GPT-NeoX 20B with Tensor Parallelism and ZeRO-1 Optimizer (``neuronx-distributed`` )
===============================================================================================
In this section, we showcase how to pretrain a GPT-NeoX 20B model using the sequence parallel optimization
of tensor parallelism in the ``neuronx-distributed`` package. Please refer to the `Neuron Samples repository <https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/tp_dp_gpt_neox_20b_hf_pretrain>`__ to view the files in this tutorial.
This GPT-NeoX 20B tutorial differs from the :ref:`GPT-NeoX 6.9B tutorial<gpt_neox_tp_zero1_tutorial>` in the following ways:
* sequence parallel optimization has been applied
* parallel cross entropy has been applied
* the model size has been increased from 6.9B to 20B
* the TP degree has been increased from 8 to 32
Setting up the environment is the same as in the :ref:`GPT-NeoX 6.9B tutorial<gpt_neox_tp_zero1_tutorial>`.
**Let’s download the scripts for pretraining:**
.. code:: ipython3
mkdir -p ~/examples/tp_dp_gpt_neox_hf_pretrain
cd ~/examples/tp_dp_gpt_neox_hf_pretrain
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/tp_dp_gpt_neox_20b_hf_pretrain/tp_dp_gpt_neox_20b_hf_pretrain.py
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/tp_dp_gpt_neox_20b_hf_pretrain/tp_dp_gpt_neox_20b_hf_pretrain.sh
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/tp_dp_gpt_neox_20b_hf_pretrain/modeling_gpt_neox_nxd.py
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/tp_dp_gpt_neox_20b_hf_pretrain/utils.py
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/common/adamw_fp32_optim_params.py
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/common/get_dataset.py
wget https://raw.githubusercontent.com/aws-neuron/aws-neuron-samples/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/common/requirements.txt
python3 -m pip install -r requirements.txt
Next let’s download and pre-process the dataset:
.. code:: ipython3
cd ~/examples/tp_dp_gpt_neox_hf_pretrain
python3 get_dataset.py
At this point, you are all set to start training.
**Running training**
We first pre-compile the graphs using the ``neuron_parallel_compile``.
Suppose the cluster queue name is ``compute1-dy-training-0`` and we are using nodes 1-4;
let’s run the command below:
.. code:: ipython3
sbatch --exclusive \
--nodelist=compute1-dy-training-0-[1-4] \
--wrap="srun neuron_parallel_compile bash $(pwd)/tp_dp_gpt_neox_20b_hf_pretrain.sh"
This script uses a tensor-parallel size of 32.
This will automatically set the zero-1 sharding degree to 4 (4 * 32 workers / tensor_parallel_size).
Once the graphs are compiled, we can now run training and observe our loss go down.
To run the training, we just run the above command but without ``neuron_parallel_compile``.
.. code:: ipython3
sbatch --exclusive \
--nodelist=compute1-dy-training-0-[1-4] \
--wrap="srun bash $(pwd)/tp_dp_gpt_neox_20b_hf_pretrain.sh"
**Sequence Parallel**
We made the following model level modifications to enable sequence parallel:
* turn on ``sequence_parallel_enabled`` of ``ColumnParallelLinear`` and ``RowParallelLinear``
in ``GPTNeoXAttention`` and ``GPTNeoXMLP``;
* replace torch ``LayerNorm`` in ``GPTNeoXLayer`` and ``GPTNeoXModel`` with neuronx-distributed ``LayerNorm``
with ``sequence_parallel_enabled``
turned on;
* dimension transposition of intermediate states in the forward function of ``GPTNeoXAttention``.
* dimension transposition and collective communication of intermediate states in the forward function of ``GPTNeoXModel``.
At the training script level, we enable:
* all-reduce of sequence parallel gradients at the gradient accumulation boundary.
Please check `modeling_gpt_neox_nxd.py <https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/tp_dp_gpt_neox_20b_hf_pretrain/modeling_gpt_neox_nxd.py>`__ and `tp_dp_gpt_neox_20b_hf_pretrain.py <https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/training/tp_dp_gpt_neox_hf_pretrain/tp_dp_gpt_neox_20b_hf_pretrain/tp_dp_gpt_neox_20b_hf_pretrain.py>`__ for details.
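As a hedged illustration of the model-level changes listed above (the real code lives in
``modeling_gpt_neox_nxd.py``; the sizes and variable names below are placeholders), the
parallel linear layers are simply constructed with ``sequence_parallel_enabled=True``:
.. code:: ipython3
    from neuronx_distributed.parallel_layers import layers
    hidden_size = 6144   # placeholder; the tutorial takes this from the GPT-NeoX config
    # Column-parallel QKV projection with sequence parallelism enabled.
    query_key_value = layers.ColumnParallelLinear(
        hidden_size,
        3 * hidden_size,
        gather_output=False,
        sequence_parallel_enabled=True,
    )
    # Row-parallel output projection with sequence parallelism enabled.
    dense = layers.RowParallelLinear(
        hidden_size,
        hidden_size,
        input_is_parallel=True,
        sequence_parallel_enabled=True,
    )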
**Parallel Cross Entropy**
To enable parallel cross entropy, we made the following model-level modifications:
* replace the ``CrossEntropyLoss`` with neuronx-distributed ``parallel_cross_entropy`` in the forward
function of ``GPTNeoXForCausalLM``.
* use ``ColumnParallelLinear`` for the ``embed_out`` layer in ``GPTNeoXForCausalLM``.
Please check ``modeling_gpt_neox_nxd.py`` for details.
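A hedged sketch of these two changes is shown below; the import path for
``parallel_cross_entropy`` and the exact loss reduction are assumptions, so treat
``modeling_gpt_neox_nxd.py`` as the authoritative version:
.. code:: ipython3
    from neuronx_distributed.parallel_layers import layers
    from neuronx_distributed.parallel_layers.loss import parallel_cross_entropy  # assumed import path
    hidden_size, vocab_size = 6144, 50432   # placeholder sizes
    # embed_out keeps its output sharded across tensor-parallel ranks
    # (gather_output=False), so the loss is computed with the vocab-parallel
    # cross entropy instead of torch.nn.CrossEntropyLoss.
    embed_out = layers.ColumnParallelLinear(hidden_size, vocab_size, gather_output=False)
    def causal_lm_loss(hidden_states, labels):
        logits = embed_out(hidden_states)
        shift_logits = logits[..., :-1, :].contiguous()
        shift_labels = labels[..., 1:].contiguous()
        loss = parallel_cross_entropy(shift_logits, shift_labels)
        return loss.mean()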
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-runtime/nrt-troubleshoot.rst.txt
```
.. _nrt-troubleshooting:
Neuron Runtime Troubleshooting on Inf1, Inf2 and Trn1
=====================================================
This document aims to provide more information on how to fix issues you
might encounter while using the Neuron Runtime 2.x or above. For each
issue we will provide an explanation of what happened and what can
potentially correct the issue.
If your issue is not listed below or you have a more nuanced problem, contact
us via `issues <https://github.com/aws/aws-neuron-sdk/issues>`__ posted
to this repo, the `AWS Neuron developer
forum <https://forums.aws.amazon.com/forum.jspa?forumID=355>`__, or
through AWS support.
.. contents:: Table of contents
:local:
:depth: 2
Generic Errors
$$$$$$$$$$$$$$
Neuron Driver installation fails
--------------------------------
aws-neuron-dkms is a driver package which needs to be compiled during
installation. The compilation requires kernel headers for the instance's
kernel. ``uname -r`` can be used to find the kernel version on the instance.
In some cases, the installed kernel headers might be newer than the
instance's kernel itself.
Please look at the aws-neuron-dkms installation log for a message like the
following:
::
Building for 4.14.193-149.317.amzn2.x86_64
Module build for kernel 4.14.193-149.317.amzn2.x86_64 was skipped since the
kernel headers for this kernel does not seem to be installed.
If the installation log is not available, check whether the module is
loaded.
::
$ lsmod | grep neuron
If the above has no output, it means the ``aws-neuron-dkms``
installation failed.
Solution
''''''''
1. Stop all applications using the NeuronCores.
2. Uninstall aws-neuron-dkms ``sudo apt remove aws-neuron-dkms`` or
``sudo yum remove aws-neuron-dkms``
3. Install kernel headers for the current kernel
``sudo apt install -y linux-headers-$(uname -r)`` or
``sudo yum install -y kernel-devel-$(uname -r) kernel-headers-$(uname -r)``
4. Install aws-neuron-dkms ``sudo apt install aws-neuron-dkms`` or
``sudo yum install aws-neuron-dkms``
------------
Application fails to start
--------------------------
Neuron Runtime requires the Neuron Driver (aws-neuron-dkms package) to access Neuron
devices. If the driver is not installed, Neuron Runtime won't be able to access the
Neuron devices and will fail with an error message in the console and syslog.
If ``aws-neuron-dkms`` is not installed then the error message will be like the following::
2021-Aug-11 18:38:27.0917 13713:13713 ERROR NRT:nrt_init Unable to determine Neuron Driver version. Please check aws-neuron-dkms package is installed.
If ``aws-neuron-dkms`` is installed but does not support the latest runtime then the error message will be like the following::
2021-Aug-11 19:18:21.0661 24616:24616 ERROR NRT:nrt_init This runtime requires Neuron Driver version 2.0 or greater. Please upgrade aws-neuron-dkms package.
When using any supported framework from Neuron SDK version 2.5.0 and Neuron Driver (aws-neuron-dkms) versions 2.4 or older, Neuron Runtime will return the following error message::
2022-Dec-01 09:34:12.0559 138:138 ERROR HAL:aws_hal_tpb_pooling_write_profile failed programming the engine
Solution
''''''''
Please follow the installation steps in :ref:`setup-guide-index` to install ``aws-neuronx-dkms``.
------------
This Neuron Runtime (compatibility id: X) is not compatible with the installed aws-neuron-dkms package
------------------------------------------------------------------------------------------------------
This error is caused by incompatibility between the Neuron Driver (dkms package) and the Runtime Library (runtime-lib package). The driver remains backwards compatible with older versions of Neuron Runtime, but newer versions of the Runtime might rely on the functionality that is only provided by a newer driver. In that case, an update to the newer driver is required.
In some cases the compatibility error persists even after the driver has been updated. That happens when the update process fails to reload the driver at the end of the update. Note that ``$ modinfo neuron`` will misleadingly show the new version because modinfo reads the version information for neuron.ko file that’s been successfully replaced.
Reload failure happens because one of the processes is still using Neuron Devices and thus the driver cannot be reloaded.
Solution
''''''''
Check for any process that is still using the Neuron driver by running lsmod:
.. code:: bash
ubuntu@ip-10-1-200-50:~$ lsmod | grep neuron
neuron 237568 0
ubuntu@ip-10-1-200-50:~$
The “Used by” counter, the second number, should be 0. If it is not, there is still a running process that is using Neuron. Terminate that process and either:
.. code:: bash
$ sudo rmmod neuron
$ sudo modprobe neuron
Or simply rerun the installation one more time. The driver logs its version in dmesg:
.. code:: bash
$ sudo dmesg
...
[21531.105295] Neuron Driver Started with Version:2.9.4.0-8a6fdf292607dccc3b7059ebbe2fb24c60dfc7c4
A common culprit is a Jupyter process. If you are using Jupyter on the instance, make sure to terminate the Jupyter process before updating the driver.
------------
Neuron Core is in use
---------------------
A NeuronCore can't be shared between two applications. If an application
has started using a NeuronCore, all other applications trying to use that
NeuronCore will fail during runtime initialization with the following
message in the console and in syslog:
.. code:: bash
2021-Aug-27 23:22:12.0323 28078:28078 ERROR NRT:nrt_allocate_neuron_cores NeuronCore(s) not available - Requested:nc1-nc1 Available:0
Solution
''''''''
Terminate any other processes that are using NeuronCore devices and then try launching the application again. If you are using Jupyter, ensure that you only have a single Jupyter kernel attempting to access the NeuronCores by restarting or shutting-down any other kernels, which will release any NeuronCores that might be in use.
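If you do need to run more than one application on the same instance at the same time,
each process can be restricted to its own subset of cores with the
``NEURON_RT_VISIBLE_CORES`` environment variable (a hedged illustration; see
:ref:`nrt-configuration` for the authoritative description):
.. code:: python
    import os
    # Hedged illustration: give this process NeuronCores 0-3 and launch the
    # second application with a disjoint range (for example "4-7") so the two
    # processes do not contend for the same cores. Set this before the Neuron
    # Runtime is initialized.
    os.environ["NEURON_RT_VISIBLE_CORES"] = "0-3"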
------------
Unsupported NEFF Version
------------------------
While loading a model (NEFF), Neuron Runtime checks the version compatibility.
If the version of the NEFF is incompatible with the Runtime, it fails the
model load with the following error message:
::
NEFF version mismatch supported: 1.1 received: 2.0
Solution
''''''''
Use compatible versions of Neuron Compiler and Runtime. Updating to the
latest version of both Neuron Compiler and Neuron Runtime is the
simplest solution. If updating one of the two is not an option, please
refer to the :ref:`neuron-runtime-release-notes`
of the Neuron Runtime to determine NEFF version support.
------------
Unsupported Hardware Operator Code
----------------------------------
While loading a model (NEFF), Neuron Runtime checks whether the hardware operators are supported. If they are not,
Neuron Runtime will display the following error messages:
::
2023-Jul-28 22:23:13.0357 101413:101422 ERROR TDRV:translate_one_pseudo_instr_v2 Unsupported hardware operator code 214 found in neff.
2023-Jul-28 22:23:13.0357 101413:101422 ERROR TDRV:translate_one_pseudo_instr_v2 Please make sure to upgrade to latest aws-neuronx-runtime-lib and aws-neuronx-collective; for detailed installation instructions visit Neuron documentation.
Solution
''''''''
Upgrade to the latest Neuron Runtime and Neuron Collectives.
------------
Insufficient Memory
-------------------
While loading a model (NEFF), Neuron Runtime reserves both device and host memory
for storing the weights, ifmaps and ofmaps of the model. The memory consumption of
each model is different. If Neuron Runtime is unable to allocate memory, the
model load fails with the following message in syslog
::
kernel: [XXXXX] neuron:mc_alloc: device mempool [0:0] total 1073741568 occupied 960539030 needed 1272 available 768
Solution
''''''''
As the error is contextual to what's going on with your instance, the
exact next step is unclear. Try unloading some of the loaded models
which will free up device DRAM space. If this is still a problem, moving
to a larger Inf1 instance size with additional NeuronCores may help.
------------
Insufficient number of NeuronCores
----------------------------------
The NEFF requires more NeuronCores than available on the instance.
Check for error messages in syslog similar to:
::
NRT: 26638:26638 ERROR TDRV:db_vtpb_get_mla_and_tpb Could not find VNC id n
NRT: 26638:26638 ERROR NMGR:dlr_kelf_stage Failed to create shared io
NRT: 26638:26638 ERROR NMGR:stage_kelf_models Failed to stage graph: kelf-a.json to NeuronCore
NRT: 26638:26638 ERROR NMGR:kmgr_load_nn_post_metrics Failed to load NN: xxxxxxx, err: 2
Solution
''''''''
The NeuronCores may be in use by models you are not actively using.
Ensure you've unloaded models you're not using and terminated unused applications.
If this is still a problem, moving to a larger Inf1 instance
size with additional NeuronCores may help.
--------------
Numerical Error
---------------
Neuron devices will detect any NaN generated during execution and
report it. If Neuron Runtime sees that NaNs were generated, it fails
the execution request with a Numerical Error and the following
message:
::
nrtd[nnnnn]: .... Error notifications found on NC .... INFER_ERROR_SUBTYPE_NUMERICAL
Solution
''''''''
This is usually an indication of either an error in the model or an error in the
input.
Report the issue to Neuron by posting the relevant details on GitHub
`issues <https://github.com/aws/aws-neuron-sdk/issues>`__.
Memory Errors
$$$$$$$$$$$$$
Transient memory errors
-----------------------
::
Uncorrectable memory error is detected on Neuron device: 5:1 metadata: 0x2. The error might cause incorrect computational results and might affect training convergence. Please
terminate and restart from the last checkpoint if the convergence is impacted.
Solution
^^^^^^^^
Neuron detected a single uncorrectable bit flip in the device memory.
The execution can continue but there is a possibility of a numerical
error. If this is a concern, terminate and restart from the last known
good check point.
Persistent memory errors
------------------------
::
Uncorrectable memory error is detected on Neuron device: 5:1 metadata: 0x2. Failing execution.
.. _solution-1:
Solution
^^^^^^^^
Multiple uncorrectable errors are detected during execution. The
execution cannot continue. This is most likely caused by faulty
hardware. Terminate and move to a different instance.
Failure to initialize Neuron
----------------------------
::
nd0 nc0 Timestamp program stop timeout (1000 ms)
nd0 nc0 Error while waiting for timestamp program to end on TPB eng 0
nd0 nc0 Failed to stop neuron core
nd0 nc0 Failed to end timestamp sync programs
TDRV not initialized
Failed to initialize devices, error:5
.. _solution-2:
Solution
^^^^^^^^
A previously executed application left Neuron devices in a running state.
Reset the Neuron devices by reloading the Neuron Driver. Note that this is a
temporary workaround; future versions of Neuron will reset running
devices automatically.
::
sudo rmmod neuron; sudo modprobe neuron
An application is trying to use more cores than are available on the instance
-----------------------------------------------------------------------------
::
Could not open the nd1
.. _solution-3:
Solution
^^^^^^^^
Use a properly sized instance. trn1.32xlarge has 32 NeuronCores;
trn1.2xlarge has 2 NeuronCores.
EFA and Collective Communication Errors
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Missing aws-neuronx-collectives package
---------------------------------------
The **aws-neuronx-collectives** package is required to execute Collective
Communication on a single instance and across multiple instances.
::
NCCL init error: Error opening libnccom.so, cannot use collective operations! Please set LD_LIBRARY_PATH to library location. Error: libnccom.so: cannot open shared object
file: No such file or directory
Please make sure to install correct version of aws-neuronx-collectives; for detailed installation instructions visit Neuron documentation
.. _solution-4:
Solution
^^^^^^^^
Install the aws-neuronx-collectives package. If the installation used a
non-default destination, set LD_LIBRARY_PATH accordingly.
.. _missing-efa-installer-package:
Missing efa installer package.
------------------------------
The **efa-installer** package is required to execute Collective
Communication across multiple instances.
::
Unable to run multi-instance workload. Ofi plugin is not installed or EFA is not enabled
.. _solution-5:
Solution
^^^^^^^^
Follow the directions to install the efa-installer package. Make sure to add
the path to the libfabric library to LD_LIBRARY_PATH.
.. _efa-is-not-enabled-in-trn132xlarage:
EFA is not enabled in trn1.32xlarge
------------------------------------
EFA is used as a transport for Collective Communication among multiple
instances. EFA must be enabled on the instances used for multi-node
training.
::
OFI plugin initNet() failed is EFA enabled?
.. _solution-6:
Solution
^^^^^^^^
Confirm that EFA is enabled by running the lspci command and making sure
there are eight EFA devices. For example:
::
[ec2-user@ip-10-0-13-247 ~]$ lspci -tv
-+-[0000:a0]-+-00.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-01.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-19.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1a.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1b.0 Amazon.com, Inc. NeuronDevice
| +-1c.0 Amazon.com, Inc. NeuronDevice
| +-1d.0 Amazon.com, Inc. NeuronDevice
| +-1e.0 Amazon.com, Inc. NeuronDevice
| \-1f.0 Amazon.com, Inc. NVMe SSD Controller
+-[0000:90]-+-00.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-01.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-19.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1a.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1b.0 Amazon.com, Inc. NeuronDevice
| +-1c.0 Amazon.com, Inc. NeuronDevice
| +-1d.0 Amazon.com, Inc. NeuronDevice
| +-1e.0 Amazon.com, Inc. NeuronDevice
| \-1f.0 Amazon.com, Inc. NVMe SSD Controller
+-[0000:20]-+-00.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-01.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-19.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1a.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1b.0 Amazon.com, Inc. NeuronDevice
| +-1c.0 Amazon.com, Inc. NeuronDevice
| +-1d.0 Amazon.com, Inc. NeuronDevice
| +-1e.0 Amazon.com, Inc. NeuronDevice
| \-1f.0 Amazon.com, Inc. NVMe SSD Controller
+-[0000:10]-+-00.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-01.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-19.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1a.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1b.0 Amazon.com, Inc. NeuronDevice
| +-1c.0 Amazon.com, Inc. NeuronDevice
| +-1d.0 Amazon.com, Inc. NeuronDevice
| +-1e.0 Amazon.com, Inc. NeuronDevice
| \-1f.0 Amazon.com, Inc. NVMe SSD Controller
\-[0000:00]-+-00.0 Intel Corporation 440FX - 82441FX PMC [Natoma]
+-01.0 Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
+-01.3 Intel Corporation 82371AB/EB/MB PIIX4 ACPI
+-03.0 Amazon.com, Inc. Device 1111
+-04.0 Amazon.com, Inc. NVMe EBS Controller
\-1f.0 Amazon.com, Inc. NVMe EBS Controller
Launch instances with EFA enabled and try again. If you are not planning to use
the instances for multi-node training, or you are running on trn1.2xlarge, this
error message can be ignored.
Communication timeout
---------------------
Ranks exchange information during NEFF loading and before the start of
the execution. The loading/execution cannot move forward until all ranks
are ready.
::
Timeout waiting for RX (waited 120 sec) - retrying
::
Timeout waiting for incoming connection (waited 120 sec) - retrying
::
Connect to localhost:33666 failed - retrying
.. _solution-7:
Solution
^^^^^^^^
The communication timeouts are not fatal; the ranks will continue
waiting indefinitely. In most cases the timeouts are caused by one of the
ranks getting delayed, usually by recompilation of a graph. The
execution resumes after the graph is compiled (which might take a significant
amount of time). It is possible to determine whether compilation is in
progress by checking the logs on all nodes.
Communication timeouts might also indicate that one of the nodes or
ranks is hung. If that is the case, terminate the run and restart from
the last known good checkpoint.
.. _communication-errors:
Communication errors.
---------------------
::
RX, connection closed by remote peer
There could be other similar messages indicating that ranks failed to
communicate.
.. _solution-8:
Solution
^^^^^^^^
One of the ranks or nodes encountered a problem and terminated.
Terminate the run and restart from the last known checkpoint.
.. _efa-kernel-messages-dmesg-after-process-termination:
EFA Kernel messages (dmesg) after process termination.
------------------------------------------------------
::
[298850.502143] neuron:npid_detach: neuron:npid_detach: pid=90193, slot=0
[298850.919248] efa 0000:a0:1a.0 rdmap160s26: Failed to process command DEREG_MR (opcode 8) comp_status 7 err -22
.. _solution-9:
Solution
^^^^^^^^
When a process that executed Collective Communication terminates, it
deregisters buffers that were registered with the networking stack.
There is a race condition because the Neuron driver also deregisters buffers
owned by the terminating process as part of the memory cleanup. The error is
benign and will be removed in future releases.
Failure to find bootstrap interface
-----------------------------------
::
No interface found in the same subnet as remote address fe80::1461:22ff:fe33:b471<45015>
No usable listening interface found
.. _solution-10:
Solution
^^^^^^^^
The bootstrap code is incorrectly trying to use a link-local IPv6 address for
communication. This error will be fixed in the next Neuron release. In
the meantime, as a workaround, disable IPv6 on the instances.
::
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
Name resolution failure
-----------------------
.. code:: bash
WARN Invalid NCCL_COMM_ID [compute1-st-kaena-training-0-1.pcluster-trn1-24-pdx80-2n.pcluster:41211], please use format: <ipv4>:<port> or [<ipv6>]:<port>
.. _solution-11:
Solution
^^^^^^^^
Verify that the name can be resolved by DNS using nslookup or dig. The currently released version fails to resolve FQDNs longer than 63 characters. This error will be fixed in an upcoming Neuron release. In the meantime, use shorter names to ensure that the FQDN length does not exceed the maximum of 63 characters.
Usage of Neuron Custom C++ Operators
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Neuron Runtime timeout or GPSIMD exception
------------------------------------------
At this point, a reset of the Neuron Runtime is required after running a model which
invoked a Neuron Custom C++ operator. Otherwise, a Neuron Runtime timeout or
GPSIMD exception may occur.
Example Neuron Runtime timeout:
::
2023-Jan-09 20:27:41.0593 15042:15042 ERROR TDRV:exec_consume_tpb_status_notifications Missing infer_status notification: (end:1)
2023-Jan-09 20:27:41.0593 15042:15042 ERROR TDRV:exec_consume_tpb_status_notifications Missing infer_status notification: (end:2)
2023-Jan-09 20:27:41.0593 15042:15042 ERROR TDRV:exec_consume_tpb_status_notifications Missing infer_status notification: (end:3)
2023-Jan-09 20:27:41.0593 15042:15042 ERROR TDRV:exec_consume_tpb_status_notifications Missing infer_status notification: (end:4)
2023-Jan-09 20:27:41.0593 15042:15042 ERROR TDRV:exec_consume_tpb_status_notifications Missing infer_status notification: (end:0)
2023-Jan-09 20:27:41.0593 15042:15042 ERROR TDRV:exec_consume_infer_status_notifications (FATAL-RT-UNDEFINED-STATE) inference timeout (600000 ms) on Neuron Device 0 NC 0, waiting for execution completion notification
2023-Jan-09 20:27:41.0600 15042:15042 ERROR NMGR:dlr_infer Inference completed with err: 5
Example GPSIMD exception:
::
2023-Jan-06 22:28:01.0845 137472:137472 ERROR TDRV:pool_stdio_queue_consume_all_entries Printing stderr from GPSIMD:
GPSIMD EXCEPTION OCCURRED: ILLEGAL INSTRUCTION
Subtype/Type/Cause: 0x201
Exception PC: 0x840001E8
Solution
''''''''
If either of the above errors is seen and ``NEURON_RT_RESET_CORES`` is set to
0, either unset it or set it to 1. This enables the default runtime
behaviour of resetting NeuronCores when initializing applications. See
:ref:`nrt-configuration` for more information.
Also note that the timeout period can be changed by setting
``NEURON_RT_EXEC_TIMEOUT``. See :ref:`nrt-configuration` for more information.
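For example, both variables can be set in the launching process before the Neuron
Runtime is initialized (a hedged illustration; exporting them in the shell before
starting the application is equivalent):
.. code:: python
    import os
    # Hedged illustration: restore the default behaviour of resetting NeuronCores
    # on application start and raise the execution timeout. The timeout value is
    # illustrative; set these before the framework initializes the Neuron Runtime.
    os.environ["NEURON_RT_RESET_CORES"] = "1"
    os.environ["NEURON_RT_EXEC_TIMEOUT"] = "1200"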
FI_EFA_FORK_SAFE
----------------
Older Linux kernels (<5.15) require the environment variable FI_EFA_FORK_SAFE to be set to 1 for libfabric to operate correctly. Specifically, Amazon Linux 2 uses the 5.10 kernel and requires the variable to be set.
When the variable is not set, multi-node collective communication will be disabled; intra-node collective communication is still possible. The following error message will be logged the first time a model containing collective communication is loaded:
::
Linux kernel 5.10 requires setting FI_EFA_FORK_SAFE=1 environment variable. Multi-node support will be disabled.
Please restart with FI_EFA_FORK_SAFE=1 set."
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _nrt-troubleshooting:
Neuron Runtime Troubleshooting on Inf1, Inf2 and Trn1
=====================================================
This document aims to provide more information on how to fix issues you
might encounter while using the Neuron Runtime 2.x or above. For each
issue we will provide an explanation of what happened and what can
potentially correct the issue.
If your issue is not listed below or you have a more nuanced problem, contact
us via `issues <https://github.com/aws/aws-neuron-sdk/issues>`__ posted
to this repo, the `AWS Neuron developer
forum <https://forums.aws.amazon.com/forum.jspa?forumID=355>`__, or
through AWS support.
.. contents:: Table of contents
:local:
:depth: 2
Generic Errors
$$$$$$$$$$$$$$
Neuron Driver installation fails
--------------------------------
aws-neuron-dkms is a driver package which needs to be compiled during
installation. The compilation requires kernel headers for the instance's
kernel. ``uname -r`` can be used to find kernel version in the instance.
In some cases, the installed kernel headers might be newer than the
instance's kernel itself.
Please look at the aws-neuron-dkms installation log for message like the
following:
::
Building for 4.14.193-149.317.amzn2.x86_64
Module build for kernel 4.14.193-149.317.amzn2.x86_64 was skipped since the
kernel headers for this kernel does not seem to be installed.
If installation log is not available, check whether the module is
loaded.
::
$ lsmod | grep neuron
If the above has no output then that means ``aws-neuron-dkms``
installation is failed.
Solution
''''''''
1. Stop all applications using the NeuronCores.
2. Uninstall aws-neuron-dkms ``sudo apt remove aws-neuron-dkms`` or
``sudo yum remove aws-neuron-dkms``
3. Install kernel headers for the current kernel
``sudo apt install -y linux-headers-$(uname -r)`` or
``sudo yum install -y kernel-devel-$(uname -r) kernel-headers-$(uname -r)``
4. Install aws-neuron-dkms ``sudo apt install aws-neuron-dkms`` or
``sudo yum install aws-neuron-dkms``
------------
Application fails to start
--------------------------
Neuron Runtime requires Neuron Driver(aws-neuron-dkms package) to access Neuron
devices. If the driver is not installed then Neuron Runtime wont able to access the
Neuron devices and will fail with an error message in console and syslog.
If ``aws-neuron-dkms`` is not installed then the error message will be like the following::
2021-Aug-11 18:38:27.0917 13713:13713 ERROR NRT:nrt_init Unable to determine Neuron Driver version. Please check aws-neuron-dkms package is installed.
If ``aws-neuron-dkms`` is installed but does not support the latest runtime then the error message will be like the following::
2021-Aug-11 19:18:21.0661 24616:24616 ERROR NRT:nrt_init This runtime requires Neuron Driver version 2.0 or greater. Please upgrade aws-neuron-dkms package.
When using any supported framework from Neuron SDK version 2.5.0 and Neuron Driver (aws-neuron-dkms) versions 2.4 or older, Neuron Runtime will return the following error message::
2022-Dec-01 09:34:12.0559 138:138 ERROR HAL:aws_hal_tpb_pooling_write_profile failed programming the engine
Solution
''''''''
Please follow the installation steps in :ref:`setup-guide-index` to install ``aws-neuronx-dkms``.
------------
This Neuron Runtime (compatibility id: X) is not compatible with the installed aws-neuron-dkms package
------------------------------------------------------------------------------------------------------
This error is caused by incompatibility between the Neuron Driver (dkms package) and the Runtime Library (runtime-lib package). The driver remains backwards compatible with older versions of Neuron Runtime, but newer versions of the Runtime might rely on the functionality that is only provided by a newer driver. In that case, an update to the newer driver is required.
In some cases the compatibility error persists even after the driver has been updated. That happens when the update process fails to reload the driver at the end of the update. Note that ``$ modinfo neuron`` will misleadingly show the new version because modinfo reads the version information for neuron.ko file that’s been successfully replaced.
Reload failure happens because one of the processes is still using Neuron Devices and thus the driver cannot be reloaded.
Solution
''''''''
Check for any process that is still using the Neuron driver by running lsmod:
.. code:: bash
ubuntu@ip-10-1-200-50:~$ lsmod | grep neuron
neuron 237568 0
ubuntu@ip-10-1-200-50:~$
“Used by” counter, the second number, should be 0. If it is not, there is still a running process that is using Neuron. Terminate that process and either:
.. code:: bash
$ sudo rmmod neuron
$ sudo modprobe neuron
Or simply rerun the installation one more time. The driver logs its version in dmesg:
.. code:: base
$ sudo dmesg
...
[21531.105295] Neuron Driver Started with Version:2.9.4.0-8a6fdf292607dccc3b7059ebbe2fb24c60dfc7c4
A common culprit is a Jupyter process. If you are using Jupyter on the instance, make sure to terminate Jupyter process before updating the driver.
------------
Neuron Core is in use
---------------------
A Neuron Core cant be shared between two applications. If an application
started using a Neuron Core all other applications trying to use the
NeuronCore would fail during runtime initialization with the following
message in the console and in syslog:
.. code:: bash
2021-Aug-27 23:22:12.0323 28078:28078 ERROR NRT:nrt_allocate_neuron_cores NeuronCore(s) not available - Requested:nc1-nc1 Available:0
Solution
''''''''
Terminate any other processes that are using NeuronCore devices and then try launching the application again. If you are using Jupyter, ensure that you only have a single Jupyter kernel attempting to access the NeuronCores by restarting or shutting-down any other kernels, which will release any NeuronCores that might be in use.
------------
Unsupported NEFF Version
------------------------
While loading a model(NEFF), Neuron Runtime checks the version compatibility.
If the version the NEFF is incompatible with Runtime then it would fail the
model load with following error message:
::
NEFF version mismatch supported: 1.1 received: 2.0
Solution
''''''''
Use compatible versions of Neuron Compiler and Runtime. Updating to the
latest version of both Neuron Compiler and Neuron Runtime is the
simplest solution. If updating one of the two is not an option, please
refer to the :ref:`neuron-runtime-release-notes`
of the Neuron Runtime to determine NEFF version support.
------------
Unsupported Hardware Operator Code
----------------------------------
While loading a model(NEFF), Neuron Runtime checks whether the hardware operators are supported or not. If unsupported,
Neuron Runtime will display the following error messages:
::
2023-Jul-28 22:23:13.0357 101413:101422 ERROR TDRV:translate_one_pseudo_instr_v2 Unsupported hardware operator code 214 found in neff.
2023-Jul-28 22:23:13.0357 101413:101422 ERROR TDRV:translate_one_pseudo_instr_v2 Please make sure to upgrade to latest aws-neuronx-runtime-lib and aws-neuronx-collective; for detailed installation instructions visit Neuron documentation.
Solution
''''''''
Upgrade to latest Neuron Runtime and Neuron Collectives.
------------
Insufficient Memory
-------------------
While loading a model(NEFF), Neuron Runtime reserves both device and host memory
for storing weights, ifmap and ofmap of the Model. The memory consumption of
each model is different. If Neuron Runtime is unable to allocate memory then
the model load would fail with the following message in syslog
::
kernel: [XXXXX] neuron:mc_alloc: device mempool [0:0] total 1073741568 occupied 960539030 needed 1272 available 768
Solution
''''''''
As the error is contextual to what's going on with your instance, the
exact next step is unclear. Try unloading some of the loaded models
which will free up device DRAM space. If this is still a problem, moving
to a larger Inf1 instance size with additional NeuronCores may help.
------------
Insufficient number of NeuronCores
----------------------------------
The NEFF requires more NeuronCores than available on the instance.
Check for error messages in syslog similar to:
::
NRT: 26638:26638 ERROR TDRV:db_vtpb_get_mla_and_tpb Could not find VNC id n
NRT: 26638:26638 ERROR NMGR:dlr_kelf_stage Failed to create shared io
NRT: 26638:26638 ERROR NMGR:stage_kelf_models Failed to stage graph: kelf-a.json to NeuronCore
NRT: 26638:26638 ERROR NMGR:kmgr_load_nn_post_metrics Failed to load NN: xxxxxxx, err: 2
Solution
''''''''
The NeuronCores may be in use by models you are not actively using.
Ensure you've unloaded models you're not using and terminated unused applications.
If this is still a problem, moving to a larger Inf1 instance
size with additional NeuronCores may help.
--------------
Numerical Error
---------------
Neuron Devices will detect any NaN generated during execution and
report it. If Neuron Runtime sees NaNs are generated then it would
fail the execution request with Numerical Error with the following
message:
::
nrtd[nnnnn]: .... Error notifications found on NC .... INFER_ERROR_SUBTYPE_NUMERICAL
Solution
''''''''
This usually an indication of either error in the model or error in the
input.
Report issue to Neuron by posting the relevant details on GitHub
`issues <https://github.com/aws/aws-neuron-sdk/issues>`__.
Memory Errors
$$$$$$$$$$$$$
Transient memory errors
-----------------------
::
Uncorrectable memory error is detected on Neuron device: 5:1 metadata: 0x2. The error might cause incorrect computational results and might affect training convergence. Please
terminate and restart from the last checkpoint if the convergence is impacted.
Solution
^^^^^^^^
Neuron detected a single uncorrectable bit flip in the device memory.
The execution can continue but there is a possibility of a numerical
error. If this is a concern, terminate and restart from the last known
good check point.
Persistent memory errors
------------------------
::
Uncorrectable memory error is detected on Neuron device: 5:1 metadata: 0x2. Failing execution.
.. _solution-1:
Solution
^^^^^^^^
Multiple uncorrectable errors are detected during execution. The
execution cannot continue. This is most likely caused by faulty
hardware. Terminate and move to a different instance.
Failure to initialize Neuron
----------------------------
::
nd0 nc0 Timestamp program stop timeout (1000 ms)
nd0 nc0 Error while waiting for timestamp program to end on TPB eng 0
nd0 nc0 Failed to stop neuron core
nd0 nc0 Failed to end timestamp sync programs
TDRV not initialized
Failed to initialize devices, error:5
.. _solution-2:
Solution
^^^^^^^^
A previously executed application left the Neuron devices in a running state.
Reset the Neuron devices by reloading the Neuron Driver. Note that this is a
temporary workaround; future versions of Neuron will reset running
devices automatically.
::
sudo rmmod neuron; sudo modprobe neuron
An application is trying to use more cores than are available on the instance
-----------------------------------------------------------------------------
::
Could not open the nd1
.. _solution-3:
Solution
^^^^^^^^
Use a properly sized instance: trn1.32xlarge has 32 NeuronCores,
trn1.2xlarge has 2 NeuronCores.
EFA and Collective Communication Errors
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Missing aws-neuronx-collectives package
---------------------------------------
**aws-neuronx-collectives** package is required to execute Collective
Communication on a single instance and across multiple instances.
::
NCCL init error: Error opening libnccom.so, cannot use collective operations! Please set LD_LIBRARY_PATH to library location. Error: libnccom.so: cannot open shared object
file: No such file or directory
Please make sure to install correct version of aws-neuronx-collectives; for detailed installation instructions visit Neuron documentation
.. _solution-4:
Solution
^^^^^^^^
Install the aws-neuronx-collectives package. If the installation used a
non-default destination, set LD_LIBRARY_PATH accordingly.
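For example, assuming the default installation location used elsewhere in this documentation (``/opt/aws/neuron/lib``); adjust the path for a non-default installation:
::
export LD_LIBRARY_PATH=/opt/aws/neuron/lib:$LD_LIBRARY_PATH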
.. _missing-efa-installer-package:
Missing efa installer package.
------------------------------
**efa-installer** package is required to execute Collective
Communication across multiple instances.
::
Unable to run multi-instance workload. Ofi plugin is not installed or EFA is not enabled
.. _solution-5:
Solution
^^^^^^^^
Follow the directions to install the efa-installer package. Make sure to add
the path to the libfabric library to LD_LIBRARY_PATH.
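For example, assuming the EFA installer placed Libfabric in its default location of ``/opt/amazon/efa/lib64`` (adjust the path if it was installed elsewhere):
::
export LD_LIBRARY_PATH=/opt/amazon/efa/lib64:$LD_LIBRARY_PATH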
.. _efa-is-not-enabled-in-trn132xlarage:
EFA is not enabled in trn1.32xlarge
------------------------------------
EFA is used as a transport for Collective Communication among multiple
instances. EFA must be enabled on the instances used for multi-node
training.
::
OFI plugin initNet() failed is EFA enabled?
.. _solution-6:
Solution
^^^^^^^^
Confirm that EFA is enabled by running the lspci command and making sure
there are eight EFA devices. For example:
::
[ec2-user@ip-10-0-13-247 ~]$ lspci -tv
-+-[0000:a0]-+-00.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-01.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-19.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1a.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1b.0 Amazon.com, Inc. NeuronDevice
| +-1c.0 Amazon.com, Inc. NeuronDevice
| +-1d.0 Amazon.com, Inc. NeuronDevice
| +-1e.0 Amazon.com, Inc. NeuronDevice
| \-1f.0 Amazon.com, Inc. NVMe SSD Controller
+-[0000:90]-+-00.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-01.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-19.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1a.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1b.0 Amazon.com, Inc. NeuronDevice
| +-1c.0 Amazon.com, Inc. NeuronDevice
| +-1d.0 Amazon.com, Inc. NeuronDevice
| +-1e.0 Amazon.com, Inc. NeuronDevice
| \-1f.0 Amazon.com, Inc. NVMe SSD Controller
+-[0000:20]-+-00.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-01.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-19.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1a.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1b.0 Amazon.com, Inc. NeuronDevice
| +-1c.0 Amazon.com, Inc. NeuronDevice
| +-1d.0 Amazon.com, Inc. NeuronDevice
| +-1e.0 Amazon.com, Inc. NeuronDevice
| \-1f.0 Amazon.com, Inc. NVMe SSD Controller
+-[0000:10]-+-00.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-01.0 Amazon.com, Inc. Elastic Network Adapter (ENA)
| +-19.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1a.0 Amazon.com, Inc. Elastic Fabric Adapter (EFA)
| +-1b.0 Amazon.com, Inc. NeuronDevice
| +-1c.0 Amazon.com, Inc. NeuronDevice
| +-1d.0 Amazon.com, Inc. NeuronDevice
| +-1e.0 Amazon.com, Inc. NeuronDevice
| \-1f.0 Amazon.com, Inc. NVMe SSD Controller
\-[0000:00]-+-00.0 Intel Corporation 440FX - 82441FX PMC [Natoma]
+-01.0 Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
+-01.3 Intel Corporation 82371AB/EB/MB PIIX4 ACPI
+-03.0 Amazon.com, Inc. Device 1111
+-04.0 Amazon.com, Inc. NVMe EBS Controller
\-1f.0 Amazon.com, Inc. NVMe EBS Controller
Launch instances with EFA enabled and try again. If not planning to use
the instances for multi-node training or running on trn1.2xlarge, this
error message can be ignored.
Communication timeout
---------------------
Ranks exchange information during NEFF loading and before the start of
the execution. The loading/execution cannot move forward until all ranks
are ready.
::
Timeout waiting for RX (waited 120 sec) - retrying
::
Timeout waiting for incoming connection (waited 120 sec) - retrying
::
Connect to localhost:33666 failed - retrying
.. _solution-7:
Solution
^^^^^^^^
The communication timeouts are not fatal; the ranks will continue
waiting indefinitely. In most cases the timeouts are caused by one of the
ranks getting delayed, usually by recompilation of a graph. The
execution resumes after the graph is compiled (which might take a significant
amount of time). It is possible to determine whether compilation is in
progress by checking the logs on all nodes.
Communication timeouts might also indicate that one of the nodes or
ranks is hung. If that is the case, terminate the run and restart from
the last known good checkpoint.
.. _communication-errors:
Communication errors.
---------------------
::
RX, connection closed by remote peer
There could be other similar messages indicating that ranks failed to
communicate.
.. _solution-8:
Solution
^^^^^^^^
One of the ranks or nodes encountered a problem and terminated.
Terminate the run and restart from the last known checkpoint.
.. _efa-kernel-messages-dmesg-after-process-termination:
EFA Kernel messages (dmesg) after process termination.
------------------------------------------------------
::
[298850.502143] neuron:npid_detach: neuron:npid_detach: pid=90193, slot=0
[298850.919248] efa 0000:a0:1a.0 rdmap160s26: Failed to process command DEREG_MR (opcode 8) comp_status 7 err -22
.. _solution-9:
Solution
^^^^^^^^
When a process that executed Collective Communication terminates, it
deregisters buffers that were registered with the networking stack.
There is a race condition because the Neuron driver also deregisters buffers
owned by the terminating process as part of the memory cleanup. The error is
benign and will be removed in future releases.
Failure to find bootstrap interface
-----------------------------------
::
No interface found in the same subnet as remote address fe80::1461:22ff:fe33:b471<45015>
No usable listening interface found
.. _solution-10:
Solution
^^^^^^^^
The bootstrap code incorrectly tries to use a link-local IPv6 address for
communication. This error will be fixed in the next Neuron release. In
the meantime, as a workaround, disable IPv6 on the instances.
::
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
Name resolution failure
-----------------------
.. code:: bash
WARN Invalid NCCL_COMM_ID [compute1-st-kaena-training-0-1.pcluster-trn1-24-pdx80-2n.pcluster:41211], please use format: <ipv4>:<port> or [<ipv6>]:<port>
.. _solution-11:
Solution
^^^^^^^^
Verify that the name can be resolved by DNS using nslookup or dig. The currently released version fails to resolve FQDNs longer than 63 characters. This error will be fixed in an upcoming Neuron release. In the meantime, use shorter names to ensure that the FQDN length does not exceed the maximum of 63 characters.
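For example, replacing ``<hostname>`` with the name of one of the workers:
::
nslookup <hostname>
dig +short <hostname>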
Usage of Neuron Custom C++ Operators
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Neuron Runtime timeout or GPSIMD exception
------------------------------------------
At this point, a reset of the Neuron Runtime is required after running a model which
invoked a Neuron Custom C++ operator. Otherwise, a Neuron Runtime timeout or
GPSIMD exception may occur.
Example Neuron Runtime timeout:
::
2023-Jan-09 20:27:41.0593 15042:15042 ERROR TDRV:exec_consume_tpb_status_notifications Missing infer_status notification: (end:1)
2023-Jan-09 20:27:41.0593 15042:15042 ERROR TDRV:exec_consume_tpb_status_notifications Missing infer_status notification: (end:2)
2023-Jan-09 20:27:41.0593 15042:15042 ERROR TDRV:exec_consume_tpb_status_notifications Missing infer_status notification: (end:3)
2023-Jan-09 20:27:41.0593 15042:15042 ERROR TDRV:exec_consume_tpb_status_notifications Missing infer_status notification: (end:4)
2023-Jan-09 20:27:41.0593 15042:15042 ERROR TDRV:exec_consume_tpb_status_notifications Missing infer_status notification: (end:0)
2023-Jan-09 20:27:41.0593 15042:15042 ERROR TDRV:exec_consume_infer_status_notifications (FATAL-RT-UNDEFINED-STATE) inference timeout (600000 ms) on Neuron Device 0 NC 0, waiting for execution completion notification
2023-Jan-09 20:27:41.0600 15042:15042 ERROR NMGR:dlr_infer Inference completed with err: 5
Example GPSIMD exception:
::
2023-Jan-06 22:28:01.0845 137472:137472 ERROR TDRV:pool_stdio_queue_consume_all_entries Printing stderr from GPSIMD:
GPSIMD EXCEPTION OCCURRED: ILLEGAL INSTRUCTION
Subtype/Type/Cause: 0x201
Exception PC: 0x840001E8
Solution
''''''''
If either of the above errors is seen and ``NEURON_RT_RESET_CORES`` is set to
0, either unset it or set it to 1. This restores the default runtime
behavior of resetting NeuronCores when initializing applications. See
:ref:`nrt-configuration` for more information.
Also note that the timeout period can be changed by setting
``NEURON_RT_EXEC_TIMEOUT``. See :ref:`nrt-configuration` for more information.
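For example, to restore the default reset behavior and extend the execution timeout (the timeout value shown is illustrative and is given in seconds):
::
unset NEURON_RT_RESET_CORES
export NEURON_RT_EXEC_TIMEOUT=1200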
FI_EFA_FORK_SAFE
----------------
Older Linux kernels (earlier than 5.15) require the environment variable FI_EFA_FORK_SAFE to be set to 1 for libfabric to operate correctly. In particular, Amazon Linux 2 uses the 5.10 kernel and requires the variable to be set.
When the variable is not set, multi-node collective communication is disabled; intra-node collective communication is still possible. The following error message is logged the first time a model containing collective communication is loaded:
::
Linux kernel 5.10 requires setting FI_EFA_FORK_SAFE=1 environment variable. Multi-node support will be disabled.
Please restart with FI_EFA_FORK_SAFE=1 set."
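For example, set the variable in the environment before launching the workload:
::
export FI_EFA_FORK_SAFE=1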
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-runtime/index.rst.txt
```
.. _neuron_runtime:
Neuron Runtime
==============
Neuron runtime consists of a kernel driver and C/C++ libraries which provide APIs to access Inferentia and Trainium Neuron devices. The Neuron ML framework plugins for TensorFlow, PyTorch and Apache MXNet use the Neuron runtime to load and run models on the NeuronCores. Neuron runtime loads compiled deep learning models, also referred to as Neuron Executable File Format (NEFF) files, to the Neuron devices and is optimized for high throughput and low latency.
.. toctree::
:maxdepth: 1
:hidden:
/neuron-runtime/api-reference-guide
.. toctree::
:maxdepth: 1
:hidden:
/neuron-runtime/configuration-guide
.. toctree::
:maxdepth: 1
:hidden:
Misc </neuron-runtime/misc-runtime>
.. dropdown:: API Reference Guide
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
:open:
* :ref:`Runtime API <nrt-api-guide>`
.. dropdown:: Configuration Guide
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
:open:
* :ref:`Runtime Configuration <nrt-configuration>`
.. dropdown:: Misc
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
:open:
* :ref:`Troubleshooting on Inf1 and Trn1 <nrt-troubleshooting>`
* :ref:`FAQ <neuron-runtime-faq>`
* :ref:`neuron-runtime-rn`
* :ref:`neuron-driver-release-notes`
* :ref:`neuron-collectives-rn`
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-runtime/faq.rst.txt
```
.. _neuron-runtime-faq:
Neuron runtime FAQ
==================
.. contents:: Table of Contents
:local:
:depth: 1
Where can I find information about Neuron Runtime 2.x (``libnrt.so``)
---------------------------------------------------------------------
See :ref:`introduce-libnrt` for detailed information about Neuron Runtime 2.x (``libnrt.so``).
What will happen if I upgrade the Neuron Framework without upgrading to the latest kernel mode driver?
--------------------------------------------------------------------------------------------------------
Application start would fail with the following error message:
.. code:: bash
2021-Aug-11 19:18:21.0661 24616:24616 ERROR NRT:nrt_init This runtime requires Neuron Driver version 2.0 or greater. Please upgrade aws-neuron-dkms package.
Do I need to recompile my model to use the Runtime Library?
-----------------------------------------------------------
No. Runtime 2.x supports all the models compiled with Neuron Compiler 1.x.
Do I need to change my application launch command?
--------------------------------------------------
No.
How do I restart/start/stop the Neuron Runtime?
-----------------------------------------------
Since the Neuron Runtime is a library, starting/stopping the application results in starting/stopping the Neuron Runtime.
How do I know which runtimes are associated with which Neuron Device(s)?
------------------------------------------------------------------------
`neuron-ls` and `neuron-top` can be used to find out applications using Neuron Devices.
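For example:
.. code:: bash
neuron-ls
neuron-top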
What about RedHat or other versions of Linux and Windows?
--------------------------------------------------------
They are not officially supported yet.
How can I take advantage of multiple NeuronCores to run multiple inferences in parallel?
------------------------------------------------------------------------------------------
Examples of this for TensorFlow and MXNet are found
:ref:`here <tensorflow-tutorials>` and :ref:`here <mxnet-tutorials>`.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-runtime/nrt-api-guide.rst.txt
```
.. _nrt-api-guide:
Developer's Guide - Neuron Runtime
==================================
.. contents:: Table of contents
:local:
:depth: 3
Introduction
------------
This guide is intended to support a deeper understanding of the Neuron Runtime and how ML applications are built using the Runtime APIs directly.
Most customers will not need this level of detail, as the interactions with the Neuron Runtime are already taken care of by popular ML frameworks with built-in Neuron support
such as torch-neuron and tensorflow-neuron.
This guide is focused on the information you need to know when building custom frameworks that will call libnrt APIs directly from C/C++ apps.
.. note::
The next few paragraphs provide a brief introduction to the Neuron hardware and the Neuron Runtime architecture. Customers who'd rather skip this and jump straight to building their first ML
application which runs without the aid of an ML framework, should go to :ref:`first_app`.
The Neuron Runtime Library (libnrt) is the intermediate layer between Application + Framework and Neuron Driver + Neuron Device.
It provides a C API for initializing the Neuron hardware, staging models and input data, executing inferences and training iterations on the staged models, and retrieving output data. The vast majority of ML applications running on Neuron will follow one of the following 3 architectural templates:
.. figure:: ../images/neuron-rt-diagram.png
`Individual processes executing models on one or more Neuron Devices`
.. figure:: ../images/neuron-rt-diagram-2.png
`Processes working together on executing models within the same instance - libnccom (The Neuron Collective Communication Library) handles inter-worker communication`
.. figure:: ../images/neuron-rt-diagram-3.png
`Processes working together on executing models across multiple instances - libnccom, libfabric and the EFA driver handle communication`
.. _reqs:
Required Software
-----------------
A more comprehensive guide to installing Neuron software can be found in the :ref:`torch_quick_start` guide.
The Neuron Runtime requires the Neuron Driver, which is provided by the ``aws-neuronx-dkms`` package:
AL2:
.. code-block:: bash
sudo yum install aws-neuronx-dkms
Ubuntu:
.. code-block:: bash
sudo apt-get install aws-neuronx-dkms
The Runtime Library consists of the libnrt.so and header files. These artifacts are version controlled and installed via the ``aws-neuronx-runtime-lib`` package. After installing the package, the binary (``libnrt.so``) is found in
``/opt/aws/neuron/lib`` and the needed header files are found in ``/opt/aws/neuron/include``:
AL2:
.. code-block:: bash
sudo yum install aws-neuronx-runtime-lib
Ubuntu:
.. code-block:: bash
sudo apt-get install aws-neuronx-runtime-lib
For applications that use distributed training or distributed inferences, the Neuron Collective Communication Library is required:
AL2:
.. code-block:: bash
sudo yum install aws-neuronx-collectives
Ubuntu:
.. code-block:: bash
sudo apt-get install aws-neuronx-collectives
In case of multi-instance training, the EFA driver and the Libfabric library - provided by the EFA installer - need to be installed as well:
AL2 & Ubuntu:
.. code-block:: bash
curl -O https://efa-installer.amazonaws.com/aws-efa-installer-latest.tar.gz
wget https://efa-installer.amazonaws.com/aws-efa-installer.key && gpg --import aws-efa-installer.key
cat aws-efa-installer.key | gpg --fingerprint
wget https://efa-installer.amazonaws.com/aws-efa-installer-latest.tar.gz.sig && gpg --verify ./aws-efa-installer-latest.tar.gz.sig
tar -xvf aws-efa-installer-latest.tar.gz
cd aws-efa-installer && sudo bash efa_installer.sh --yes
cd
sudo rm -rf aws-efa-installer-latest.tar.gz aws-efa-installer
.. _insttypes:
Brief Introduction to Neuron Hardware
-------------------------------------
Neuron Machine Learning Accelerators (or Neuron Devices) are custom accelerators designed to efficiently execute Machine Learning workloads such as executing inference on a given model or running a distributed training job. Depending on the type of workload and its size, customers can opt for the following Neuron-equipped EC2 instances:
.. list-table::
:widths: 40 40 40 40 40
:header-rows: 1
* - Workload type
- Neuron Device Name
- Instance type(s)
- Devices Per Instance
- Availability
* - Inference
- Inferentia II (v3)
- inf2.xlarge, inf2.8xlarge
- 1
- Available Now!
* - Inference
- Inferentia II (v3)
- inf2.24xlarge
- 6
- Available Now!
* - Inference
- Inferentia II (v3)
- inf2.48xlarge
- 12
- Available Now!
* - Inference
- Inferentia (v1)
- inf1.xlarge, inf1.2xlarge
- 1
- Available Now!
* - Inference
- Inferentia (v1)
- inf1.6xlarge
- 4
- Available Now!
* - Inference
- Inferentia (v1)
- inf1.24xlarge
- 16
- Available Now!
* - Training
- Trainium (v2)
- trn1.2xlarge
- 1
- Available Now!
* - Training
- Trainium (v2)
- trn1.32xlarge
- 16
- Available Now!
Neuron Device
^^^^^^^^^^^^^
Each Neuron Device consists of multiple execution units - called NeuronCores, a high throughput device memory, PCIe interfaces to the host CPU and to the other Neuron Devices and other components, depending on the Neuron Device version.
To get the number of NeuronCores per Neuron Device, the amount of Neuron Device memory and the way devices are directly connected, use the ``neuron-ls`` tool:
.. code-block:: bash
neuron-ls --topology
instance-type: trn1.32xlarge
instance-id: i-0633517e496256bf8
+--------+--------+--------+---------------+---------+
| NEURON | NEURON | NEURON | CONNECTED | PCI |
| DEVICE | CORES | MEMORY | DEVICES | BDF |
+--------+--------+--------+---------------+---------+
| 0 | 2 | 32 GB | 12, 3, 4, 1 | 10:1c.0 |
| 1 | 2 | 32 GB | 13, 0, 5, 2 | 10:1d.0 |
| 2 | 2 | 32 GB | 14, 1, 6, 3 | a0:1c.0 |
| 3 | 2 | 32 GB | 15, 2, 7, 0 | a0:1d.0 |
| 4 | 2 | 32 GB | 0, 7, 8, 5 | 20:1b.0 |
| 5 | 2 | 32 GB | 1, 4, 9, 6 | 20:1c.0 |
| 6 | 2 | 32 GB | 2, 5, 10, 7 | 90:1b.0 |
| 7 | 2 | 32 GB | 3, 6, 11, 4 | 90:1c.0 |
| 8 | 2 | 32 GB | 4, 11, 12, 9 | 20:1d.0 |
| 9 | 2 | 32 GB | 5, 8, 13, 10 | 20:1e.0 |
| 10 | 2 | 32 GB | 6, 9, 14, 11 | 90:1d.0 |
| 11 | 2 | 32 GB | 7, 10, 15, 8 | 90:1e.0 |
| 12 | 2 | 32 GB | 8, 15, 0, 13 | 10:1e.0 |
| 13 | 2 | 32 GB | 9, 12, 1, 14 | 10:1b.0 |
| 14 | 2 | 32 GB | 10, 13, 2, 15 | a0:1e.0 |
| 15 | 2 | 32 GB | 11, 14, 3, 12 | a0:1b.0 |
+--------+--------+--------+---------------+---------+
Neuron Device Topology
* * * *
│ │ │ │
▼ ▼ ▼ ▼
*––►[ 0 ]◄––►[ 1 ]◄––►[ 2 ]◄––►[ 3 ]◄––*
▲ ▲ ▲ ▲
│ │ │ │
▼ ▼ ▼ ▼
*––►[ 4 ]◄––►[ 5 ]◄––►[ 6 ]◄––►[ 7 ]◄––*
▲ ▲ ▲ ▲
│ │ │ │
▼ ▼ ▼ ▼
*––►[ 8 ]◄––►[ 9 ]◄––►[10 ]◄––►[11 ]◄––*
▲ ▲ ▲ ▲
│ │ │ │
▼ ▼ ▼ ▼
*––►[12 ]◄––►[13 ]◄––►[14 ]◄––►[15 ]◄––*
▲ ▲ ▲ ▲
│ │ │ │
* * * *
|nd_v1|
NeuronCore
^^^^^^^^^^
The NeuronCore is the primary execution unit within the accelerator. Each NeuronCore contains several execution engines
(for different types of compute operations such as tensor-based, vector and scalar), DMA engines, and a local cache.
A NeuronCore can operate independently or together with other NeuronCores, depending on the nature of the workload and the way
a model is compiled and loaded to the NeuronCores in the accelerator. Each execution engine can access the cache and DRAM attached to the accelerator device.
The primary form of data movement between the host CPU and the accelerator device, as well as between the device DRAM and NeuronCores, is Direct Memory Access (DMA).
The use of DMA enables more efficient data movement.
The Neuron Runtime Architecture
-------------------------------
|nrt_arch|
Application Interface Layer (The ``libnrt`` API)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Application Interface Layer allows applications and frameworks to use the available Neuron Devices to run
inference or training workloads. A complete reference of the C interface can be found in :ref:`nrt_api`.
Monitoring and Profiling
^^^^^^^^^^^^^^^^^^^^^^^^
The Neuron Runtime is able to capture key execution metrics which can be read in real-time using ``neuron-monitor`` and
``neuron-top``. ``neuron-monitor`` allows forwarding those metrics to Cloudwatch or a Prometheus server, enabling fleet-wide
monitoring - for more on that please refer to the ``neuron-monitor`` usage guide :ref:`neuron-monitor-ug`.
Profiling an execution is another feature of the Neuron Runtime - which provides an API for starting and stopping profiling,
as well as saving the profile data to a file, which can be used by tools such as the Neuron Tensorboard. This API is
documented in :ref:`api_profile` section.
The NEFF format and NEFF Parser
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A NEFF (*N*euron *E*xecutable *F*ile *F*ormat) is a single file container for all the artifacts needed to execute a model on one or more NeuronCores.
A NEFF is the output of the Neuron Compiler (neuron-cc). It contains Neuron machine instructions, pseudo instructions (compiler-generated instructions
which are parsed and replaced with Neuron instructions by the Neuron Runtime when the model loads), tensor information, model parameters and other components
that support the model's execution on one or more NeuronCores.
Operators that are not supported by Neuron can be compiled into CPU-executable binary and included into the NEFF as well.
The contents of a NEFF can be shown by using ``neuron-packager`` tool (which will be released soon).
Usually there is only one subgraph (which is executed on a single NeuronCore) in a NEFF:
.. code-block:: bash
NEFF Nodes:
NODE Executor Name Variable Size Type Format Shape DataType TimeSeries
1 Neuron Core sg00
image:0 3259008 IN NHWC [1 3 552 984]
net_output:0 1323972 OUT NHWC [1 78 69 123] false
In this example, there is a single subgraph, one input and one output:
|nrt_neff_single|
Some NEFFs can have multiple subgraphs (which will be deployed by the runtime on separate NeuronCores) and multiple CPU operators, as exemplified below:
.. code-block:: bash
NEFF Nodes:
NODE Executor Name Variable Size Type Format Shape DataType TimeSeries
1 Neuron Core sg00
input:0 2 IN NHWC [1 1 1 1]
nn/relu1:0 2 OUT NHWC [1 1 1 1] false
1 Neuron Core sg01
nn/relu1:0 2 IN NHWC [1 1 1 1]
nn/relu2:0 2 OUT NHWC [1 1 1 1] false
2 CPU fused_3_layout_transform
layout_transform0:0 0 OUT []
4 CPU fused_2_nn_conv2d_nn_relu
constant0 2 IN [1 1 1 1] float16
nn.relu0:0 0 OUT []
5 CPU fused_1_layout_transform_copy
nn/relu3:0 0 OUT []
6 Neuron Core sg02
nn/relu3:0 2 IN NHWC [1 1 1 1]
nn/relu4:0 2 OUT NHWC [1 1 1 1] false
6 Neuron Core sg03
nn/relu4:0 2 IN NHWC [1 1 1 1]
nn/output:0 2 OUT NHWC [1 1 1 1] false
The output above can be summarized by the graph below:
|nrt_neff|
The nodes marked with dark blue are intermediate tensors that are handled internally by the Neuron Runtime.
The other blue nodes are inputs/outputs. The green colored box indicates the operator is executed on the NeuronCore while
the red color box indicates the execution is done on the CPU.
The NEFF layer in Neuron Runtime is responsible for parsing a NEFF, validating it, and translating pseudo instructions into hardware specific
instructions and DMA descriptors.
Graph Walker and CPU Node Executor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As shown in the previous section, a NEFF can contain one or more nodes. During execution, the Neuron Runtime Graph Walker executes each node
one by one and handles copying input and output between each of them. If a node needs to be executed by the CPU, then a corresponding library function, found
in a .so file in the NEFF, is dynamically loaded using ``dlopen()`` during model load and executed during model execution. Since this library function is executed in the calling
thread’s context, the workload can be efficiently parallelized using a multi-threaded approach.
In the example below, each invocation of ``nrt_execute()`` would take 23ms: the first CPU node takes 1ms, the NeuronCore execution takes 20ms and the second CPU node takes 2 ms,
so the total latency is 23ms and the throughput is 43 calls per second (1000/23).
|nrt_neff_s|
If multiple threads are used, subsequent executions would be pipelined inside the runtime, hence increasing the throughput in this case to ~50 (1000/20).
|nrt_neff_m|
User Mode Driver
^^^^^^^^^^^^^^^^
This is the lowest level component of the Neuron Runtime and handles programming the engines, managing memory,
creating DMA descriptors to move data from host and device, handling notifications etc.
Memory Management
~~~~~~~~~~~~~~~~~
The Neuron Runtime is responsible for managing Neuron Device and host memory for the running models. The application is responsible for
deallocating every loaded model and allocated tensor, so the proper deallocation method needs to be called.
For more details, refer to :ref:`nrt_api` documentation.
Tools such as ``neuron-top`` and ``neuron-monitor`` can be used to determine the amount of memory being used at any given time.
.. _first_app:
Building the first Neuron application
-------------------------------------
The simple application presented here will load a NEFF file, use the provided binary files' contents as input tensors
(if a file wasn't provided for an input tensor, that input tensor will be zero-filled), and save the output tensors as
binary files.
Prerequisites
^^^^^^^^^^^^^
Building the application requires:
* a recent version of GCC
* installing the ``aws-neuronx-runtime-lib`` package as described in :ref:`reqs`
Running the built application requires:
* a Neuron-equipped instance as shown in :ref:`insttypes`
* installing the ``aws-neuronx-runtime-lib`` and the ``aws-neuronx-dkms`` package as described in :ref:`reqs`
* a NEFF file
Getting a NEFF file
^^^^^^^^^^^^^^^^^^^
When running any workload through a Neuron framework, the compiled NEFFs will be placed in ``/var/tmp/neuron-compile-cache``.
Additionally, setting the ``NEURON_FRAMEWORK_DEBUG`` environment variable to ``1`` before running the workload will enable
the compiled NEFFs to be written to the current directory.
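For example, a minimal way to keep the generated NEFFs in the current directory when running a workload (the script name is illustrative):
.. code-block:: bash
NEURON_FRAMEWORK_DEBUG=1 python my_model.py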
The Code
^^^^^^^^
.. code-block:: c
#include <stdbool.h>
#include <nrt/nrt.h>
#include <nrt/nrt_experimental.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
#include <errno.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <pthread.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
// Function to mmap a file in the application's memory space,
// it will return a pointer to the mmapped memory and the size
// of the mmapped data will be written to *size
void *mmap_file(const char *filepath, size_t *size) {
struct stat sb;
int fd = open(filepath, O_RDONLY);
if (fd < 0 || fstat(fd, &sb) != 0) {
fprintf(stderr, "Unable to open %s: %s\n", filepath, strerror(errno));
return MAP_FAILED;
}
*size = sb.st_size;
return mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
}
#define P_ERR(...) fprintf(stderr, __VA_ARGS__)
#define CHECK_RESULT(res, expected, ...) \
if (res != expected) { \
fprintf(stderr, __VA_ARGS__); \
exit(-1); \
}
// struct used to load input tensors from files
typedef struct {
char *name;
size_t size;
void *data;
} input_tensor_info_t;
// simple container for input_tensor_info_t
typedef struct {
input_tensor_info_t *entries;
int entry_count;
} input_tensor_info_array_t;
// Allocate tensorsets and tensors based on the info_array and returns a valid tensorset in out_tset
// containing all the newly allocated tensors
NRT_STATUS allocate_tensors(nrt_tensor_info_array_t *info_array, nrt_tensor_usage_t usage_type, nrt_tensor_set_t **out_tset) {
NRT_STATUS result;
int tensor_idx;
nrt_tensor_info_t *tensor_info = NULL;
nrt_tensor_t *tensor = NULL;
// We allocate a nrt_tensor_set which acts as a containers for nrt_tensors
result = nrt_allocate_tensor_set(out_tset);
if (result != NRT_SUCCESS) {
P_ERR("Couldn't allocate %s tensorset\n", usage_type == NRT_TENSOR_USAGE_INPUT ? "input" : "output");
return result;
}
for (tensor_idx = 0; tensor_idx < info_array->tensor_count; tensor_idx++) {
tensor_info = &info_array->tensor_array[tensor_idx];
if (tensor_info->usage != usage_type) {
continue;
}
// Allocate the tensor with the name and size found in tensor_info_array
result = nrt_tensor_allocate(NRT_TENSOR_PLACEMENT_DEVICE, 0, tensor_info->size,
tensor_info->name, &tensor);
if (result != NRT_SUCCESS) {
P_ERR("Couldn't allocate tensor %s\n", tensor_info->name);
return result;
}
// Finally add the tensors to the newly allocated tensor set
result = nrt_add_tensor_to_tensor_set(*out_tset, tensor_info->name, tensor);
if (result != NRT_SUCCESS) {
P_ERR("Couldn't add tensor %s to tensorset\n", tensor_info->name);
return result;
}
}
return NRT_SUCCESS;
}
// Tensor iterator handler - returns false if the iteration needs to stop
typedef bool (*tensor_handler)(nrt_tensor_t *, nrt_tensor_info_t *, NRT_STATUS *, void *);
// Iterates through all the tensors in the given tensorset, based on the data in info_array for the given usage_type
// and calls the handler function with the provided args pointer
// Will return the first error returned by a handler
NRT_STATUS iterate_tensors(nrt_tensor_set_t *tset, nrt_tensor_info_array_t *info_array, nrt_tensor_usage_t usage_type,
tensor_handler handler, void *args) {
NRT_STATUS result = NRT_SUCCESS;
NRT_STATUS final_result = NRT_SUCCESS;
int tensor_idx;
nrt_tensor_info_t *tensor_info = NULL;
nrt_tensor_t *tensor = NULL;
for (tensor_idx = 0; tensor_idx < info_array->tensor_count; tensor_idx++) {
tensor_info = &info_array->tensor_array[tensor_idx];
if (tensor_info->usage != usage_type) {
continue;
}
result = nrt_get_tensor_from_tensor_set(tset, tensor_info->name, &tensor);
if (result != NRT_SUCCESS) {
P_ERR("Tensor %s not found in tensor set\n", tensor_info->name);
continue;
}
result = NRT_SUCCESS;
if ((*handler)(tensor, tensor_info, &result, args) == false) {
return result;
}
if (final_result == NRT_SUCCESS && result != final_result) {
final_result = result;
}
}
return final_result;
}
// Tensor iteration handler that checks if a tensor has an input file associated with it
// based on the CLI args
bool handler_load_inputs(nrt_tensor_t *tensor, nrt_tensor_info_t *tensor_info, NRT_STATUS *result, void* args) {
NRT_STATUS res;
int idx;
input_tensor_info_array_t *info_array = (input_tensor_info_array_t *)args;
bool input_found = false;
for (idx = 0; idx < info_array->entry_count; idx++) {
if (strcmp(info_array->entries[idx].name, tensor_info->name) != 0) {
continue;
}
if (info_array->entries[idx].size != tensor_info->size) {
P_ERR("Input file for tensor %s has incorrect size %lu, expected %lu\n",
tensor_info->name, info_array->entries[idx].size, tensor_info->size);
break;
}
res = nrt_tensor_write(tensor, info_array->entries[idx].data, 0, tensor_info->size);
if (res != NRT_SUCCESS) {
P_ERR("Unable to write content to input tensor %s\n", tensor_info->name);
} else {
input_found = true;
}
}
if (!input_found) {
fprintf(stderr, "Input tensor %s will be zero-filled\n", tensor_info->name);
}
*result = NRT_SUCCESS;
return true;
}
// Tensor iteration handler that saves outputs
bool handler_save_outputs(nrt_tensor_t *tensor, nrt_tensor_info_t *tensor_info, NRT_STATUS *result, void* args) {
static char filename[280];
int fd;
// Allocating a buffer large enough to read the entire tensor
void *tensor_data = malloc(tensor_info->size);
*result = NRT_SUCCESS;
if (tensor_data == NULL) {
fprintf(stderr, "Unable to allocate memory for saving output tensor %s\n", tensor_info->name);
*result = NRT_FAILURE;
return true;
}
// Reading the tensor to the newly allocated buffer
*result = nrt_tensor_read(tensor, tensor_data, 0, tensor_info->size);
if (*result != NRT_SUCCESS) {
fprintf(stderr, "Unable to read tensor %s\n", tensor_info->name);
free(tensor_data);
return true;
}
// Saving the tensor to a file
snprintf(filename, 280, "%s.out", tensor_info->name);
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC, 0644);
if (fd < 0) {
fprintf(stderr, "Unable to open %s for writing\n", filename);
free(tensor_data);
*result = NRT_FAILURE;
return true;
}
if (write(fd, tensor_data, tensor_info->size) != tensor_info->size) {
*result = NRT_FAILURE;
fprintf(stderr, "Unable to write tensor %s contents to file %s\n", tensor_info->name, filename);
}
close(fd);
free(tensor_data);
return true;
}
// Tensor iteration handler that deallocates tensors
bool handler_free_tensor(nrt_tensor_t *tensor, nrt_tensor_info_t *tensor_info, NRT_STATUS *result, void* args) {
*result = NRT_SUCCESS;
nrt_tensor_free(&tensor);
return true;
}
int main(int argc, char *argv[]) {
NRT_STATUS result;
int idx = 0;
int tensor_idx = 0;
void *neff_data = NULL;
size_t neff_size = 0;
void *input_data = NULL;
input_tensor_info_array_t input_tensor_info_array = {0};
input_tensor_info_t *current_input = NULL;
nrt_model_t *model = NULL;
nrt_tensor_set_t *inputs = NULL;
nrt_tensor_set_t *outputs = NULL;
nrt_tensor_t *tensor = NULL;
nrt_tensor_info_array_t *tensor_info_array = NULL;
if (argc < 2) {
fprintf(stderr, "Incorrect number of args, usage: exec_test file.neff [input_1_name] [input_1_file] ...\n");
exit(-1);
}
// Try mmapping the NEFF file first, so we can fail fast if not found or
// mmap fails
neff_data = mmap_file(argv[1], &neff_size);
if (neff_data == MAP_FAILED) {
fprintf(stderr, "Unable to map file %s\n", argv[1]);
exit(-1);
}
// mmap input tensor files (if any provided) and fill the input_tensor_info array
if (argc > 3) {
input_tensor_info_array.entries = malloc(((argc - 2) / 2) * sizeof(input_tensor_info_t)); // one entry per <name, file> pair
for (idx = 2; idx < argc; idx += 2) {
if (idx + 1 >= argc) {
break;
}
current_input = &input_tensor_info_array.entries[input_tensor_info_array.entry_count];
input_data = mmap_file(argv[idx + 1], &current_input->size);
if (input_data == MAP_FAILED) {
fprintf(stderr, "Unable to mmap inputs file %s\n", argv[idx + 1]);
continue;
}
current_input->name = argv[idx];
current_input->data = input_data;
input_tensor_info_array.entry_count++;
}
}
// Before calling any nrt API, nrt_init must be called
// Since this is not running as part of a framework, the correct parameter for 'framework' is
// NRT_FRAMEWORK_TYPE_NO_FW and the others can be empty strings
result = nrt_init(NRT_FRAMEWORK_TYPE_NO_FW, "", "");
CHECK_RESULT(result, NRT_SUCCESS, "NRTLIB could not be initialized, error: %d\n", (int)result);
// Loading the NEFF
printf("Loading NEFF\n");
result = nrt_load(neff_data, neff_size, -1, -1, &model);
CHECK_RESULT(result, NRT_SUCCESS, "Unable to load NEFF\n");
// In order to allocate tensors, first we need to call nrt_get_model_tensor_info which
// will give us the model tensors' names and sizes in tensor_info_array
printf("Getting IO tensor information\n");
result = nrt_get_model_tensor_info(model, &tensor_info_array);
CHECK_RESULT(result, NRT_SUCCESS, "Unable to get model tensor information\n");
// Allocating tensors
printf("Creating I/O data (%ld tensors)\n", tensor_info_array->tensor_count);
result = allocate_tensors(tensor_info_array, NRT_TENSOR_USAGE_INPUT, &inputs);
CHECK_RESULT(result, NRT_SUCCESS, "Error allocating input tensors\n");
result = allocate_tensors(tensor_info_array, NRT_TENSOR_USAGE_OUTPUT, &outputs);
CHECK_RESULT(result, NRT_SUCCESS, "Error allocating input tensors\n");
// Loading input files (if provided)
iterate_tensors(inputs, tensor_info_array, NRT_TENSOR_USAGE_INPUT, handler_load_inputs,
(void*) &input_tensor_info_array);
// Executing model using the tensors in the inputs tensorset and writing the outputs to the tensors
// in the outputs tensorset
result = nrt_execute(model, inputs, outputs);
CHECK_RESULT(result, NRT_SUCCESS, "Error during model execution: %d\n", result);
// Saving outputs to files
result = iterate_tensors(outputs, tensor_info_array, NRT_TENSOR_USAGE_OUTPUT, handler_save_outputs, NULL);
if (result != NRT_SUCCESS) {
P_ERR("Error saving outputs to files\n");
}
// Unloading the model
result = nrt_unload(model);
if (result != NRT_SUCCESS) {
P_ERR("Unable to unload NEFF\n");
}
printf("Freeing tensors\n");
iterate_tensors(inputs, tensor_info_array, NRT_TENSOR_USAGE_INPUT, handler_free_tensor, NULL);
iterate_tensors(outputs, tensor_info_array, NRT_TENSOR_USAGE_OUTPUT, handler_free_tensor, NULL);
nrt_destroy_tensor_set(&inputs);
nrt_destroy_tensor_set(&outputs);
printf("Deallocating model tensor info\n");
// We are done with the tensor_info_array, we can dispose of it
nrt_free_model_tensor_info(tensor_info_array);
printf("Deallocating inputs tensor info\n");
// Unmapping the input files
for (tensor_idx = 0; tensor_idx < input_tensor_info_array.entry_count; tensor_idx++) {
munmap(input_tensor_info_array.entries[tensor_idx].data, input_tensor_info_array.entries[tensor_idx].size);
}
if (input_tensor_info_array.entries) {
free(input_tensor_info_array.entries);
}
// Clean-up the runtime
printf("Cleaning up the runtime\n");
nrt_close();
printf("DONE\n");
}
Building the example:
.. code-block:: bash
gcc run_neff.c -o run_neff -lnrt -pthread -I/opt/aws/neuron/include -L/opt/aws/neuron/lib
Running the example:
.. code-block:: bash
./run_neff my.neff [input_1] [input_1.bin] [input_2] [input_2.bin] ...
Code Breakdown
^^^^^^^^^^^^^^
Initialization and cleanup
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: c
// ...
result = nrt_init(NRT_FRAMEWORK_TYPE_NO_FW, "", "");
// ...
nrt_close();
The Neuron Runtime is initialized by calling ``nrt_init`` and all applications should call ``nrt_close`` once they're done
using it. For more details on these functions, go to the :ref:`api_init` section.
Loading the NEFF
~~~~~~~~~~~~~~~~
Once the contents of a NEFF file have been mapped to virtual memory using mmap ...
.. code-block:: c
// ...
void *mmap_file(const char *filepath, size_t *size) {
struct stat sb;
int fd = open(filepath, O_RDONLY);
if (fd < 0 || fstat(fd, &sb) != 0) {
fprintf(stderr, "Unable to open %s: %s\n", filepath, strerror(errno));
return MAP_FAILED;
}
*size = sb.st_size;
return mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
}
// ...
neff_data = mmap_file(argv[1], &neff_size);
... the NEFF is loaded using ``nrt_load``. The runtime will decide the optimal placement for the model - it will
choose the best NeuronCore on which to deploy the model:
.. code-block:: c
// ...
result = nrt_load(neff_data, neff_size, -1, -1, &model);
// ...
The call will return a valid model handle in ``nrt_model_t*`` which will subsequently be
used for other calls to the Runtime API (such as ``nrt_execute``).
For more details on the model API (including ``nrt_load``), go to the :ref:`api_model` section.
Creating input/output tensors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The main container for tensors is the ``nrt_tensor_set_t*``. Tensors (``nrt_tensor_t*``) are not passed directly to the NEFF execution function, ``nrt_execute``,
they have to be wrapped in a ``nrt_tensor_set_t*``. The ``allocate_tensors`` function will allocate the tensorset and the tensors for the requested usage type
(``NRT_TENSOR_USAGE_INPUT`` or ``NRT_TENSOR_USAGE_OUTPUT``) and return the tensorset containing the allocated tensors in ``out_tset``.
.. code-block:: c
NRT_STATUS allocate_tensors(nrt_tensor_info_array_t *info_array, nrt_tensor_usage_t usage_type, nrt_tensor_set_t **out_tset) {
// ...
// We allocate a nrt_tensor_set which acts as a containers for nrt_tensors
result = nrt_allocate_tensor_set(out_tset);
// ...
for (tensor_idx = 0; tensor_idx < info_array->tensor_count; tensor_idx++) {
tensor_info = &info_array->tensor_array[tensor_idx];
if (tensor_info->usage != usage_type) {
continue;
}
// ...
// Allocate the tensor with the name and size found in tensor_info_array
result = nrt_tensor_allocate(NRT_TENSOR_PLACEMENT_DEVICE, 0, tensor_info->size,
tensor_info->name, &tensor);
// ...
// Finally add the tensors to the newly allocated tensor set
result = nrt_add_tensor_to_tensor_set(*out_tset, tensor_info->name, tensor);
// ...
}
// ...
}
Iterating through tensors in an nrt_tensor_set_t
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A helper function, ``iterate_tensors`` is used to iterate through the ``nrt_tensor_t`` in a tensorset and call the function
``handler`` for each of them. If the handler function returns ``false`` iteration ends. ``iterate_tensors`` returns the first error
reported by the handler function.
.. code-block:: c
// Tensor iterator handler - returns false if the iteration needs to stop
typedef bool (*tensor_handler)(nrt_tensor_t *, nrt_tensor_info_t *, NRT_STATUS *, void *);
NRT_STATUS iterate_tensors(nrt_tensor_set_t *tset, nrt_tensor_info_array_t *info_array, nrt_tensor_usage_t usage_type,
tensor_handler handler, void *args) {
// ...
for (tensor_idx = 0; tensor_idx < info_array->tensor_count; tensor_idx++) {
// ...
result = nrt_get_tensor_from_tensor_set(tset, tensor_info->name, &tensor);
// ...
if ((*handler)(tensor, tensor_info, &result, args) == false) {
return result;
}
// ...
}
Deallocating input/output tensors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
After the execution is complete, the tensors are deallocated using ``iterate_tensors`` and the tensorsets are deallocated
using ``nrt_destroy_tensor_set``:
.. code-block:: c
iterate_tensors(inputs, tensor_info_array, NRT_TENSOR_USAGE_INPUT, handler_free_tensor, NULL);
iterate_tensors(outputs, tensor_info_array, NRT_TENSOR_USAGE_OUTPUT, handler_free_tensor, NULL);
nrt_destroy_tensor_set(&inputs);
nrt_destroy_tensor_set(&outputs);
The ``handler_free_tensor`` function simply deallocates the given tensor:
.. code-block:: c
bool handler_free_tensor(nrt_tensor_t *tensor, nrt_tensor_info_t *tensor_info, NRT_STATUS *result, void* args) {
// ...
nrt_tensor_free(&tensor);
// ...
}
For more details on the tensor API, check out the :ref:`api_tensor` and the :ref:`api_tensorset` sections.
Executing the NEFF
~~~~~~~~~~~~~~~~~~
The NEFF is executed using a call to ``nrt_execute``. If ``nrt_execute`` completes successfully, the output tensors are
read and saved to files (one binary file per output tensor) using ``iterate_tensors``:
.. code-block:: c
// Executing model using the tensors in the inputs tensorset and writing the outputs to the tensors
// in the outputs tensorset
result = nrt_execute(model, inputs, outputs);
// ...
// Saving outputs to files
result = iterate_tensors(outputs, tensor_info_array, NRT_TENSOR_USAGE_OUTPUT, handler_save_outputs, NULL);
The iteration handler reads the tensor data and writes it to a file with the same name as the tensor:
.. code-block:: c
bool handler_save_outputs(nrt_tensor_t *tensor, nrt_tensor_info_t *tensor_info, NRT_STATUS *result, void* args) {
// ...
void *tensor_data = malloc(tensor_info->size);
// ...
// Reading the tensor to the newly allocated buffer
*result = nrt_tensor_read(tensor, tensor_data, 0, tensor_info->size);
// ...
// Saving the tensor to a file
snprintf(filename, 280, "%s.out", tensor_info->name);
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC, 0644);
// ...
if (write(fd, tensor_data, tensor_info->size) != tensor_info->size) {
// ...
}
close(fd);
For more details on the execution API, go to the :ref:`api_exec` section.
.. _nrt_api:
The LIBNRT API
------------------
API Return Codes
^^^^^^^^^^^^^^^^
All API calls will return an NRT_STATUS value representing the return status of the call. In case of an error, an error message
will also be logged (based on the logging settings, more on that in the next section). The table below contains all the possible error codes.
Please note that some error codes only apply to certain API calls.
.. list-table::
:widths: 40 260
:header-rows: 1
* - Return Code
- Error
* - ``NRT_SUCCESS``
- Call was successful
* - ``NRT_FAILURE``
- Generic failure
* - ``NRT_INVALID``
- Invalid NEFF, bad instruction, bad DMA descriptor, input tensor name/size does not match the model, etc.
* - ``NRT_INVALID_HANDLE``
- Invalid handle (e.g. an invalid model handle)
* - ``NRT_RESOURCE``
- Failed to allocate a resource for the requested operation
* - ``NRT_TIMEOUT``
- Operation timed out
* - ``NRT_HW_ERROR``
- Hardware failure
* - ``NRT_LOAD_NOT_ENOUGH_NC``
- The number of available NeuronCores is insufficient for the requested operation
* - ``NRT_UNSUPPORTED_NEFF_VERSION``
- NEFF version unsupported
* - ``NRT_UNINITIALIZED``
- Returned when attempting an API call when the library is not initialized
* - ``NRT_CLOSED``
- Returned when attempting an API call after ``nrt_close()`` was called
* - ``NRT_EXEC_BAD_INPUT``
- Invalid input has been submitted to nrt_execute()
* - ``NRT_EXEC_COMPLETED_WITH_NUM_ERR``
- Execution completed with numerical errors (produced NaN)
* - ``NRT_EXEC_COMPLETED_WITH_ERR``
- Execution was completed with other errors, either logical (event double clear), or hardware (parity error)
* - ``NRT_EXEC_NC_BUSY``
- The neuron core is locked (in use) by another model/thread
.. _api_init:
Initialization, configuration and teardown
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. c:function:: NRT_STATUS nrt_init(nrt_framework_type_t framework, const char *fw_version, const char *fal_version)
Initializes the Neuron Runtime’s internal state and the Neuron hardware’s state.
This should be called before any other nrt_* call is attempted - although a small set of functions
are exempt from this rule (for example ``nrt_get_total_nc_count`` and ``get_nrt_version``). Any call to the NRT
library API will return NRT_FAILURE if ``nrt_init`` has not been called beforehand and that API call requires it.
The runtime can be configured by setting the appropriate environment variable before this API call.
The list of available environment variables is found in the :ref:`api_config` section.
:param framework: Can be one of:
``NRT_FRAMEWORK_TYPE_INVALID, // Invalid framework
NRT_FRAMEWORK_TYPE_NO_FW, // No framework
NRT_FRAMEWORK_TYPE_TENSORFLOW, // Tensorflow
NRT_FRAMEWORK_TYPE_PYTORCH, // Pytorch
NRT_FRAMEWORK_TYPE_MXNET // Mxnet``
This argument is used by our Neuron Tools to determine the type of application running;
it has no other impact on the functioning of the runtime.
Applications using a custom framework or calling the Neuron Runtime directly should use ``NRT_FRAMEWORK_TYPE_NO_FW``.
:param const char *fw_version: version of the framework on top of which this runtime is running
:param const char *fal_version: version of the framework adapter on top of which this runtime is running
Applications using `NRT_FRAMEWORK_TYPE_NO_FW` for the first argument should use two empty strings for the versions.
.. _api_config:
Environment variables used to configure the Runtime Library
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``NEURON_RT_LOG_LOCATION=<CONSOLE/SYSLOG>, default=CONSOLE``
Chooses the output target for the Neuron Runtime logs (either console or syslog).
``NEURON_RT_LOG_LEVEL=<ERROR/WARN/INFO/DEBUG/TRACE>, default=ERROR``
Specifies the logging verbosity for the Neuron Runtime library, from ERROR (least verbose), to TRACE (most verbose).
``NEURON_RT_NUM_CORES=<n>``
Specifies how many NeuronCores are needed for the application. During ``nrt_init`` the requested number of NeuronCores are **exclusively** associated with the calling process and
become unavailable to any other process attempting to use them. If there aren't enough NeuronCores available, ``nrt_init`` will return an error. Once the owner process has called ``nrt_close``
or exited, the NeuronCores are released and become available to be associated with another process. By default, all NeuronCores present on the instance will be made available to the caller.
``NEURON_RT_VISIBLE_CORES=<m,n,p-q>``
Similar to the previous variable, it gives the calling process exclusive access to a set of NeuronCores, but it allows explicitly specifying which NeuronCores are available to the application based on their zero-based indices.
This variable can be a list of NeuronCores, for example: ``NEURON_RT_VISIBLE_CORES=3,4,5,6``, a range of NeuronCores, for example: ``NEURON_RT_VISIBLE_CORES=3-6``, or a combination of both: ``NEURON_RT_VISIBLE_CORES=3-5,6``.
The resulting range must be contiguous, for example this is not valid: ``NEURON_RT_VISIBLE_CORES=3,5,6`` because 4 is missing from the list, and indices need to be provided in consecutive increasing order.
.. note::
If both ``NEURON_RT_VISIBLE_CORES`` and ``NEURON_RT_NUM_CORES`` are defined, ``NEURON_RT_VISIBLE_CORES`` will be used.
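For example, to give an application exclusive access to four specific NeuronCores (the application name is illustrative):
.. code-block:: bash
NEURON_RT_VISIBLE_CORES=3-6 ./my_neuron_app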
``NEURON_RT_ROOT_COMM_ID=<ip_address:port>``
Mandatory for applications that run workloads containing Collective Communication operators; allows specifying the IP address and port of the rank 0 worker in the Collective Communication worker pool.
For example: ``NEURON_RT_ROOT_COMM_ID=10.0.1.2:46820``.
``NEURON_RT_STOCHASTIC_ROUNDING_SEED=<value>``
Allows setting a value for the stochastic rounding seed. Has no effect on inf1.
``NEURON_RT_DEBUG_MEMLOG_MAX_SIZE=<value>, default=1024*1024``
Allows changing the number of entries in the memory allocations log. This log contains an entry for every allocation and deallocation and will be dumped to a file in case of a memory allocation failure in CSV format.
.. c:function:: NRT_STATUS nrt_close()
Closes all the devices used by the application (as defined by ``NEURON_RT_NUM_CORES``/``NEURON_RT_VISIBLE_CORES``)
and cleans up the runtime state. Note that once ``nrt_close`` has been called, most nrt_* API calls will fail if attempted.
.. _api_model:
The Model API
^^^^^^^^^^^^^
.. c:function:: NRT_STATUS nrt_load(const void *neff_bytes, size_t size, int32_t start_nc, int32_t nc_count, nrt_model_t **model)
Loads a NEFF file whose content is found in `neff_bytes`, with the given size, placing it on ``nc_count`` NeuronCores starting with NeuronCore index `start_nc`.
If either ``nc_count`` or ``start_nc`` are -1, an optimal value for each will be determined automatically. The model can be configured using a list of environment
variables read inside this API call which can be found in the :ref:`model_env` section. It returns a handle to the loaded model in the ``nrt_model_t*``
pointer if the call succeeds. The returned handle represents the loaded model and can be used with calls that operate on an ``nrt_model_t*`` (such as ``nrt_execute``).
:param neff_bytes: Pointer to existing NEFF file data
:param size: Size of data in ``neff_bytes``
:param start_nc: Index of the NeuronCore on which to stage the model. The first NeuronCore owned by the application will always have the index ``0`` - for example, even when setting ``NEURON_RT_VISIBLE_CORES=3,4``, the two NeuronCores will be referred to as ``0`` and ``1``. If -1, an optimal index will be automatically determined (based on current NeuronCore usage).
:param nc_count: Number of NeuronCores on which to stage the model. If its value is a multiple of the amount of NeuronCores needed by the model, the model will be replicated on the number of NeuronCores specified in the argument. This feature is called **TBD** and it will be explained in detail in a separate section. If its value is -1, the model will be staged a single time, using the number of cores needed by a single instance of the model.
:param model: Model handle returned by the call which can be passed to other functions that operate on models (such as ``nrt_execute``).
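As a minimal sketch (mirroring the example application from :ref:`first_app`, and assuming ``neff_data``/``neff_size`` already hold the NEFF contents in memory):

.. code-block:: c

    nrt_model_t *model = NULL;
    // Let the runtime pick the placement (start_nc = -1) and load a single instance (nc_count = -1)
    NRT_STATUS result = nrt_load(neff_data, neff_size, -1, -1, &model);
    if (result != NRT_SUCCESS) {
        fprintf(stderr, "Unable to load NEFF\n");
    }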
.. _model_env:
Environment variables used to configure a model being loaded
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``NEURON_RT_EXEC_TIMEOUT=<n>, default=30 (inf1), default=600 (trn1, inf2)``
Maximum time, in seconds, allowed for one execution before timing out - which will cause the call to ``nrt_execute`` to fail and return ``NRT_TIMEOUT``.
``NEURON_RT_VALIDATE_HASH=<true/false>, default=false``
Verify the integrity of NEFF data being loaded by checking against a checksum found in the header.
``NEURON_RT_STOCHASTIC_ROUNDING_EN=<true/false>, default=false``
Enable stochastic rounding.
.. c:function:: NRT_STATUS nrt_load_collectives(const void *neff_bytes, size_t size, int32_t start_nc, int32_t nc_count, uint32_t g_device_id, uint32_t g_device_count, nrt_model_t **model)
Same as ``nrt_load`` (same environment variables can be used to configure the model), but must be used when loading NEFFs containing Collective Communication operators. Uses the same arguments as `nrt_load`, but adds 2 extra ones.
:param neff_bytes: Pointer to existing NEFF file data
:param size: Size of data in ``neff_bytes``
:param start_nc: Index of NeuronCore on which to stage the model. If -1, an optimal index will be automatically determined (based on current NeuronCore usage).
:param nc_count: Number of NeuronCores on which to stage the model. If its value is a multiple of the number of NeuronCores needed by the model, the model will be replicated on the number of NeuronCores specified in the argument. This feature is called **TBD** and it will be explained in detail in a separate section. If its value is -1, the model will be staged a single time, using the number of cores needed by a single instance of the model.
:param g_device_id: Globally unique ID within the Collective Communication world associated with this model instance.
:param g_device_count: Size of the Collective Communication world (total number of participating unique IDs).
:param model: Model handle returned by the call which can be passed to other functions that operate on models (such as ``nrt_execute``).
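A hedged sketch of loading such a NEFF on one worker, assuming ``rank`` and ``world_size`` are values provided by the application's launcher:

.. code-block:: c

    // NEURON_RT_ROOT_COMM_ID must point to the rank 0 worker (see the configuration variables above)
    nrt_model_t *model = NULL;
    // rank: this worker's globally unique ID; world_size: total number of participating workers
    NRT_STATUS result = nrt_load_collectives(neff_data, neff_size, -1, -1,
                                             rank, world_size, &model);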
.. c:function:: NRT_STATUS nrt_unload(nrt_model_t *model)
Unloads the given model and frees up device and host resources.
:param model: Pointer to model to unload. All data associated with the model is deleted; do not reuse the pointer or try to deallocate it afterwards. Do not call ``nrt_unload`` again on the same ``nrt_model_t*`` pointer (think of it as a call to `free()`).
.. c:function:: NRT_STATUS nrt_get_model_nc_count(const nrt_model_t *model, uint32_t *nc_count)
Gets the number of NeuronCores used by the model and writes that value at the address pointed by ``nc_count``.
:param model: Valid pointer to an ``nrt_model_t``.
:param nc_count: If the call completes successfully, the pointed address will contain the number of NeuronCores used by the model.
.. c:function:: NRT_STATUS nrt_get_model_tensor_info(nrt_model_t *model, nrt_tensor_info_array_t **tensor_info)
Gets input/output tensor information for a given loaded model.
:param model: Valid pointer to an ``nrt_model_t``.
:param tensor_info: Pointer to a ``nrt_tensor_info_array_t*`` which will contain the tensor information data. The function allocates memory for the structure internally which can only be correctly freed by calling ``nrt_free_model_tensor_info``.
The ``nrt_tensor_info_array_t`` struct and its dependencies are defined as follows:
.. code-block:: c
typedef struct nrt_tensor_info_array {
uint64_t tensor_count; // Total number of input/output tensors used by the model
nrt_tensor_info_t tensor_array[]; // Array of tensor info representing those tensors
} nrt_tensor_info_array_t;
typedef struct nrt_tensor_info {
char name[NRT_TENSOR_NAME_MAX]; // Name of the tensor
nrt_tensor_usage_t usage; // Type of the tensor
size_t size; // Tensor size in bytes
nrt_dtype_t dtype; // Data type
uint32_t *shape; // An array representing data shape
uint32_t ndim; // The number of dimensions (number of elements in the shape array)
} nrt_tensor_info_t;
// Usage type definitions for tensors
typedef enum nrt_tensor_usage {
NRT_TENSOR_USAGE_INPUT = 0, // Tensor is used for input
NRT_TENSOR_USAGE_OUTPUT, // Tensor is used for output
} nrt_tensor_usage_t;
// Data type definitions for tensors
typedef enum nrt_dtype {
NRT_DTYPE_UNKNOWN = 0,
NRT_DTYPE_FLOAT32,
NRT_DTYPE_FLOAT16,
NRT_DTYPE_BFLOAT16,
NRT_DTYPE_INT8,
NRT_DTYPE_UINT8,
NRT_DTYPE_INT16,
NRT_DTYPE_UINT16,
NRT_DTYPE_INT32,
NRT_DTYPE_UINT32,
NRT_DTYPE_INT64,
NRT_DTYPE_UINT64
} nrt_dtype_t;
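For example, a short sketch that queries a loaded model and prints its tensors (assuming ``model`` is a handle returned by ``nrt_load``):

.. code-block:: c

    nrt_tensor_info_array_t *info_array = NULL;
    if (nrt_get_model_tensor_info(model, &info_array) == NRT_SUCCESS) {
        for (uint64_t i = 0; i < info_array->tensor_count; i++) {
            nrt_tensor_info_t *info = &info_array->tensor_array[i];
            printf("%s tensor '%s': %zu bytes\n",
                   info->usage == NRT_TENSOR_USAGE_INPUT ? "input" : "output",
                   info->name, info->size);
        }
        nrt_free_model_tensor_info(info_array);
    }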
.. c:function:: NRT_STATUS nrt_free_model_tensor_info(nrt_tensor_info_array_t *tensor_info)
Frees a ``nrt_tensor_info_array_t`` allocated by a call to ``nrt_get_model_tensor_info``. As with all deallocation functions, don’t call it more than once on the same pointer.
:param tensor_info: ``nrt_tensor_info_array_t`` to deallocate.
.. c:function:: NRT_STATUS nrt_get_model_instance_count(nrt_model_t *model, uint32_t *instance_count)
Returns the number of times this ``nrt_model_t`` is currently staged on the NeuronDevice(s) by writing it to the address pointed by ``instance_count``. It will always be >= 1. This value can be used to determine the number of threads that can optimally call ``nrt_execute`` on this ``nrt_model_t``.
:param model: Valid pointer to an ``nrt_model_t``.
:param instance_count: If the call completes successfully, the address will contain the instance count for this model.
.. _api_tensor:
The Tensor API
^^^^^^^^^^^^^^
.. c:function:: NRT_STATUS nrt_tensor_allocate(nrt_tensor_placement_t tensor_placement, int logical_nc_id, size_t size, const char *name, nrt_tensor_t **tensor)
Allocates a new tensor, placing it in either host virtual memory or device memory (based on the ``tensor_placement`` argument), on the specified NeuronCore index, of a given size, and attaches the given name to it - the name is only used for log messages.
For applications running on Inferentia, ``tensor_placement`` should always be ``NRT_TENSOR_PLACEMENT_VIRTUAL``. For all other cases, ``NRT_TENSOR_PLACEMENT_DEVICE`` should be used. If successful, the ``tensor`` address will contain a valid pointer to the newly allocated ``nrt_tensor_t``.
:param tensor_placement: Controls where the tensor will be placed, the definition of the ``nrt_tensor_placement_t`` enum is as follows:
.. code-block:: c
typedef enum {
NRT_TENSOR_PLACEMENT_DEVICE, // the tensor is allocated directly in device memory
NRT_TENSOR_PLACEMENT_HOST, // the tensor is allocated in DMAable host memory (only for sizes < 4MB)
NRT_TENSOR_PLACEMENT_VIRTUAL // the tensor is allocated in host memory
} nrt_tensor_placement_t;
:param int logical_nc_id: Zero-based NeuronCore index on which to allocate the tensor (if ``tensor_placement`` is ``NRT_TENSOR_PLACEMENT_DEVICE``) or to which associate the tensor for all other cases.
:param size: Size for the new tensor.
:param name: Name for the new tensor.
:param tensor: If the call completes successfully, the address will contain a valid ``nrt_tensor_t*`` pointer.
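For instance, a sketch of allocating a device-resident input tensor and filling it from a host buffer (``input_size``, ``input_data``, and the tensor name are placeholders supplied by the caller):

.. code-block:: c

    nrt_tensor_t *tensor = NULL;
    // Place the tensor in device memory on NeuronCore 0; the name is only used in log messages
    NRT_STATUS result = nrt_tensor_allocate(NRT_TENSOR_PLACEMENT_DEVICE, 0, input_size,
                                            "input:0", &tensor);
    if (result == NRT_SUCCESS) {
        result = nrt_tensor_write(tensor, input_data, 0, input_size);
    }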
.. c:function:: void nrt_tensor_free(nrt_tensor_t **tensor)
Frees a tensor allocated by a call to ``nrt_tensor_allocate`` and sets the ``nrt_tensor_t*`` pointer at address ``tensor`` to NULL.
:param tensor: Pointer to a pointer to a previously allocated ``nrt_tensor_t``. After the call returns, the ``nrt_tensor_t*`` pointer will be NULL.
.. c:function:: NRT_STATUS nrt_tensor_read(const nrt_tensor_t *tensor, void *buf, size_t offset, size_t size)
Reads ``size`` bytes of data from a given tensor, starting at ``offset``, to ``buf`` starting at offset 0. ``buf`` needs to be allocated with a size of at least ``size`` bytes.
:param tensor: Valid pointer to an ``nrt_tensor_t``.
:param buf: Buffer where to write read data, it needs to be at least `size` bytes in size.
:param offset: Offset within the tensor from which to begin reading.
:param size: Size to read.
.. c:function:: NRT_STATUS nrt_tensor_write(nrt_tensor_t *tensor, const void *buf, size_t offset, size_t size)
Writes ``size`` bytes of data to a given tensor, starting at ``offset``, from ``buf`` (starting at offset 0).
:param tensor: Valid pointer to an ``nrt_tensor_t``.
:param buf: Buffer containing ``size`` bytes of data to write to the tensor.
:param offset: Offset within the tensor from which to begin writing.
:param size: Size to write.
.. c:function:: size_t nrt_tensor_get_size(const nrt_tensor_t *tensor)
Returns the size, in bytes, of the given tensor.
:param tensor: Valid pointer to an ``nrt_tensor_t``.
:returns: Size in bytes of the given tensor.
.. c:function:: NRT_STATUS nrt_tensor_allocate_empty(const char *name, nrt_tensor_t **tensor)
Allocates an empty tensor, i.e. the tensor structure without any attached storage.
:param name: Name for the new tensor.
:param tensor: If the call completes successfully, the address will contain a valid ``nrt_tensor_t*`` pointer.
.. c:function:: NRT_STATUS nrt_tensor_attach_buffer(nrt_tensor_t *tensor, void *buffer, size_t size)
Attaches a caller-supplied buffer to a tensor. Any storage previously attached to the tensor is detached and freed if it was owned by the tensor.
The attached buffer is managed by the caller and must persist through the entire lifetime of the tensor - calling `nrt_tensor_free` will not deallocate it.
This changes the memory placement of the nrt_tensor_t to ``NRT_TENSOR_PLACEMENT_VIRTUAL`` regardless of the initial memory placement type.
:param tensor: Valid pointer to an ``nrt_tensor_t``.
:param buffer: Buffer of ``size`` bytes to attach to the tensor.
:param size: Size of attached buffer.
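A minimal sketch combining ``nrt_tensor_allocate_empty`` and ``nrt_tensor_attach_buffer`` (``buffer_size`` and the tensor name are placeholders; the buffer stays owned by the caller):

.. code-block:: c

    nrt_tensor_t *tensor = NULL;
    void *buffer = malloc(buffer_size);  // caller-owned storage
    if (nrt_tensor_allocate_empty("input:0", &tensor) == NRT_SUCCESS) {
        nrt_tensor_attach_buffer(tensor, buffer, buffer_size);
        // ... use the tensor ...
        nrt_tensor_free(&tensor);  // does not free the attached buffer
    }
    free(buffer);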
.. c:function:: NRT_STATUS nrt_tensor_allocate_slice(const nrt_tensor_t *tensor_source, size_t offset, size_t size, const char *name, nrt_tensor_t **tensor_slice)
Allocates a new ``nrt_tensor_t`` that doesn’t have its own backing storage - instead, it will use a part (slice) of ``tensor_source``’s storage, starting at ``offset``
with the given size. The shared backing storage is reference counted and it will not be deallocated until the last tensor using it is deallocated.
:param tensor_source: Valid pointer to a ``nrt_tensor_t`` whose storage will be used by the new tensor.
:param offset: Offset within the ``tensor_source`` used as origin for the 'slice'.
:param size: Size of storage to be used by the new tensor.
:param name: Name for the new tensor.
:param tensor_slice: If the call completes successfully, the address will contain a valid, newly allocated, ``nrt_tensor_t*`` pointer.
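For example, a sketch that creates a view over the first half of an existing tensor's storage (``tensor_source`` is assumed to be a previously allocated tensor):

.. code-block:: c

    nrt_tensor_t *slice = NULL;
    size_t half = nrt_tensor_get_size(tensor_source) / 2;
    NRT_STATUS result = nrt_tensor_allocate_slice(tensor_source, 0, half, "first_half", &slice);
    // ... use the slice ...
    nrt_tensor_free(&slice);  // the shared storage is freed only when its last user is freed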
.. c:function:: void *nrt_tensor_get_va(const nrt_tensor_t *tensor)
Returns the virtual address for an allocated tensor.
:param tensor: Valid pointer to an ``nrt_tensor_t``.
:returns: Pointer to host memory used by the tensor.
.. _api_tensorset:
The Tensorset API
~~~~~~~~~~~~~~~~~
Tensorsets are containers for tensors.
.. c:function:: NRT_STATUS nrt_allocate_tensor_set(nrt_tensor_set_t **result)
Allocates an empty ``nrt_tensor_set_t`` and places its address in ``result``.
:param result: If the call completes successfully, this address will contain a pointer to a valid, newly allocated ``nrt_tensor_set_t``.
.. c:function:: void nrt_destroy_tensor_set(nrt_tensor_set_t **tensor_set)
Frees a tensor set allocated by a call to ``nrt_allocate_tensor_set`` and sets the ``nrt_tensor_set_t*`` pointer at address ``tensor_set`` to NULL.
:param tensor_set: Pointer to a pointer to a previously allocated ``nrt_tensor_set_t``. After the call returns, the ``nrt_tensor_set_t*`` pointer will be NULL.
.. c:function:: NRT_STATUS nrt_add_tensor_to_tensor_set(nrt_tensor_set_t *tensor_set, const char *tensor_name, nrt_tensor_t *tensor)
Adds an ``nrt_tensor`` to a tensor_set under a given name. That name can be later used to retrieve the tensor.
:param tensor_set: Pointer to a valid Tensorset where to add the tensor.
:param tensor_name: Name that will be used to access the added tensor in the container. Does not need to be the same as the ``nrt_tensor_t``’s name.
:param tensor: Pointer to a valid ``nrt_tensor_t`` to add to the Tensorset.
.. c:function:: NRT_STATUS nrt_get_tensor_from_tensor_set(nrt_tensor_set_t *tensor_set, const char *tensor_name, nrt_tensor_t **tensor)
Gets an ``nrt_tensor`` from the tensor set based on the name used when it was added by ``nrt_add_tensor_to_tensor_set`` and places its address
at the address pointed by ``tensor``. If the tensor is not found, ``NRT_FAILURE`` is returned and nothing gets written at the address pointed by ``tensor``.
:param tensor_set: Pointer to a valid Tensorset containing the tensor.
:param tensor_name: Name associated with the searched ``nrt_tensor_t`` when it was added to this Tensorset. Might be different from the ``nrt_tensor_t``’s internal name.
:param tensor: Address where the address of the found ``nrt_tensor_t`` will be placed.
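Putting the Tensorset calls together, a sketch that wraps a previously allocated tensor for execution (the tensor name is illustrative; in practice it comes from ``nrt_get_model_tensor_info``):

.. code-block:: c

    nrt_tensor_set_t *inputs = NULL;
    nrt_tensor_t *found = NULL;
    if (nrt_allocate_tensor_set(&inputs) == NRT_SUCCESS) {
        nrt_add_tensor_to_tensor_set(inputs, "input:0", tensor);      // tensor allocated earlier
        nrt_get_tensor_from_tensor_set(inputs, "input:0", &found);    // found now equals tensor
    }
    // ... pass 'inputs' to nrt_execute() ...
    nrt_destroy_tensor_set(&inputs);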
.. _api_exec:
The Execution API
^^^^^^^^^^^^^^^^^
.. c:function:: NRT_STATUS nrt_execute(nrt_model_t *model, const nrt_tensor_set_t *input_set, nrt_tensor_set_t *output_set)
Runs one execution of the given ``nrt_model_t`` using the provided input tensor set and writing the results to the provided output tensor set.
:param model: Valid pointer to a `nrt_model_t` on which to run the execution.
:param input_set: Tensorset containing input data.
:param output_set: Tensor set where the output data will be written.
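A minimal sketch of a single execution, assuming ``model``, ``inputs``, and ``outputs`` were prepared as shown in the previous sections:

.. code-block:: c

    NRT_STATUS result = nrt_execute(model, inputs, outputs);
    if (result != NRT_SUCCESS) {
        fprintf(stderr, "Error during model execution: %d\n", (int)result);
    }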
.. c:function:: NRT_STATUS nrt_execute_repeat(nrt_model_t *model, const nrt_tensor_set_t *input_set, nrt_tensor_set_t *output_set, int repeat_count)
Same as ``nrt_execute`` but it will repeat the execution ``repeat_count`` times using the outputs from the (n-1)th iteration as inputs for the nth iteration.
This requires a specially compiled NEFF and it's not a commonly used call.
:param model: Valid pointer to a `nrt_model_t` on which to run the execution.
:param input_set: Tensorset containing input data.
:param output_set: Tensor set where the output data will be written.
:param repeat_count: Number of times to repeat this execution.
.. _api_profile:
The Profiling API
^^^^^^^^^^^^^^^^^
.. c:function:: NRT_STATUS nrt_profile_start(nrt_model_t *model, const char *filename)
Begins profiling of the execution of the given model. The profile data will be written to the file specified by the path in ``filename``.
The file will be truncated if it exists.
:param model: Valid pointer to a `nrt_model_t` which will be profiled by the Neuron Runtime during execution.
:param filename: Path to a file where the profile will be written. If the file already exists, it will be truncated.
.. c:function:: NRT_STATUS nrt_profile_stop(const char *filename)
Ends profiling of the execution of a model and writes profile data to ``filename``. ``filename`` needs to be the same path as the one used for ``nrt_profile_start``.
:param filename: Path to a file where the profile will be written. If the file already exists, it will be truncated.
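For illustration, a sketch that profiles a single execution (the output path is a placeholder):

.. code-block:: c

    if (nrt_profile_start(model, "/tmp/model.profile") == NRT_SUCCESS) {
        nrt_execute(model, inputs, outputs);
        nrt_profile_stop("/tmp/model.profile");
    }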
Other APIs
^^^^^^^^^^
.. c:function:: NRT_STATUS nrt_get_version(nrt_version_t *ver, size_t size)
Fills an ``nrt_version_t`` struct of the provided size with version info. The ``size`` argument allows for backwards compatibility if the struct changes in future releases.
:param *ver: Pointer to a ``nrt_version_t`` structure which is currently defined as:
.. code-block:: c
typedef struct nrt_version {
uint64_t rt_major; // major version number
uint64_t rt_minor; // minor version number
uint64_t rt_patch; // patch version number
uint64_t rt_maintenance; // maintainance version number
char rt_detail[RT_VERSION_DETAIL_LEN]; // runtime version description string
char git_hash[GIT_HASH_LEN]; // runtime git hash
} nrt_version_t;
:param size_t size: Size of the ``nrt_version_t`` structure, should always be ``sizeof(nrt_version_t)``
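For example, a sketch that retrieves and prints the runtime version:

.. code-block:: c

    nrt_version_t version;
    if (nrt_get_version(&version, sizeof(nrt_version_t)) == NRT_SUCCESS) {
        printf("Neuron Runtime %lu.%lu.%lu\n",
               (unsigned long)version.rt_major,
               (unsigned long)version.rt_minor,
               (unsigned long)version.rt_patch);
    }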
.. c:function:: NRT_STATUS nrt_get_total_nc_count(uint32_t *nc_count)
Gets the total number of NeuronCores present on the current instance. The result is not affected by the values in
``NEURON_RT_NUM_CORES`` or ``NEURON_RT_VISIBLE_CORES`` and, in fact, this function can be called before calling ``nrt_init``.
:param nc_count: If the call completes successfully, the address will contain the total number of NeuronCores present on the instance.
.. c:function:: NRT_STATUS nrt_get_visible_nc_count(uint32_t *nc_count)
Gets the total number of NeuronCores available to the application after ``nrt_init`` has parsed the configuration environment variables ``NEURON_RT_NUM_CORES`` and ``NEURON_RT_VISIBLE_CORES``
(if provided).
:param nc_count: If the call completes successfully, the address will contain the total number of NeuronCores available to the application.
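A sketch combining the two queries (``nrt_get_total_nc_count`` can be called even before ``nrt_init``):

.. code-block:: c

    uint32_t total_nc = 0, visible_nc = 0;
    nrt_get_total_nc_count(&total_nc);        // total NeuronCores on the instance
    nrt_init(NRT_FRAMEWORK_TYPE_NO_FW, "", "");
    nrt_get_visible_nc_count(&visible_nc);    // NeuronCores available after NEURON_RT_* filtering
    printf("Using %u of %u NeuronCores\n", visible_nc, total_nc);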
.. |nd_v1| image:: ../images/neuron-rt-nd-v1.png
.. |nrt_arch| image:: ../images/neuron-rt-architecture.png
.. |nrt_neff| image:: ../images/neuron-rt-neff.png
.. |nrt_neff_s| image:: ../images/neuron-rt-neff-s.png
.. |nrt_neff_m| image:: ../images/neuron-rt-neff-m.png
.. |nrt_neff_single| image:: ../images/neuron-rt-neff-single.png
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _nrt-api-guide:
Developer's Guide - Neuron Runtime
==================================
.. contents:: Table of contents
:local:
:depth: 3
Introduction
------------
This guide is intended to support a deeper understanding of the Neuron Runtime and how ML applications are built using the Runtime APIs directly.
Most customers will not need this level of detail as the interactions with the Neuron Runtime are already taken care by popular ML Frameworks with built-in Neuron support
such as torch-neuron and tensorflow-neuron.
This guide is focused on the information you need to know when building custom frameworks that will call libnrt APIs directly from C/C++ apps.
.. note::
The next few paragraphs provide a brief introduction to the Neuron hardware and the Neuron Runtime architecture. Customers who'd rather skip this and jump straight to building their first ML
application which runs without the aid of an ML framework, should go to :ref:`first_app`.
The Neuron Runtime Library (libnrt) is the intermediate layer between Application + Framework and Neuron Driver + Neuron Device.
It provides a C API for initializing the Neuron hardware, staging models and input data, executing inferences and training iterations on the staged models, and retrieving output data. The vast majority of ML applications running on Neuron will follow one of the following 3 architectural templates:
.. figure:: ../images/neuron-rt-diagram.png
`Individual processes executing models on one or more Neuron Devices`
.. figure:: ../images/neuron-rt-diagram-2.png
`Processes working together on executing models within the same instance - libnccom (The Neuron Collective Communication Library) handles inter-worker communication`
.. figure:: ../images/neuron-rt-diagram-3.png
`Processes working together on executing models across multiple instances - libnccom, libfabric and the EFA driver handle communication`
.. _reqs:
Required Software
-----------------
A more comprehensive guide to installing Neuron software can be found in the :ref:`torch_quick_start` guide.
The Neuron Runtime requires the Neuron Driver, which is provided by the ``aws-neuron-dkms`` package:
AL2:
.. code-block:: bash
sudo yum install aws-neuronx-dkms
Ubuntu:
.. code-block:: bash
sudo apt-get install aws-neuronx-dkms
The Runtime Library consists of the libnrt.so and header files. These artifacts are version controlled and installed via the ``aws-neuronx-runtime-lib`` package. After installing the package, the binary (``libnrt.so``) is found in
``/opt/aws/neuron/lib`` and the needed header files are found in ``/opt/aws/neuron/include``:
AL2:
.. code-block:: bash
sudo yum install aws-neuronx-runtime-lib
Ubuntu:
.. code-block:: bash
sudo apt-get install aws-neuronx-runtime-lib
For applications that use distributed training or distributed inferences, the Neuron Collective Communication Library is required:
AL2:
.. code-block:: bash
sudo yum install aws-neuronx-collectives
Ubuntu:
.. code-block:: bash
sudo apt-get install aws-neuronx-collectives
In case of multi-instance training, the EFA driver and the Libfabric library - provided by the EFA installer - need to be installed as well:
AL2 & Ubuntu:
.. code-block:: bash
curl -O https://efa-installer.amazonaws.com/aws-efa-installer-latest.tar.gz
wget https://efa-installer.amazonaws.com/aws-efa-installer.key && gpg --import aws-efa-installer.key
cat aws-efa-installer.key | gpg --fingerprint
wget https://efa-installer.amazonaws.com/aws-efa-installer-latest.tar.gz.sig && gpg --verify ./aws-efa-installer-latest.tar.gz.sig
tar -xvf aws-efa-installer-latest.tar.gz
cd aws-efa-installer && sudo bash efa_installer.sh --yes
cd
sudo rm -rf aws-efa-installer-latest.tar.gz aws-efa-installer
.. _insttypes:
Brief Introduction to Neuron Hardware
-------------------------------------
Neuron Machine Learning Accelerators (or Neuron Devices) are custom accelerators designed to efficiently execute Machine Learning workloads such as executing inference on a given model or running a distributed training job. Depending on the type of workload and its size, customers can opt for the following Neuron-equipped EC2 instances:
.. list-table::
:widths: 40 40 40 40 40
:header-rows: 1
* - Workload type
- Neuron Device Name
- Instance type(s)
- Devices Per Instance
- Availability
* - Inference
- Inferentia II (v3)
- inf2.xlarge, inf2.8xlarge
- 1
- Available Now!
* - Inference
- Inferentia II (v3)
- inf2.24xlarge
- 6
- Available Now!
* - Inference
- Inferentia II (v3)
- inf2.48xlarge
- 12
- Available Now!
* - Inference
- Inferentia (v1)
- inf1.xlarge, inf1.2xlarge
- 1
- Available Now!
* - Inference
- Inferentia (v1)
- inf1.6xlarge
- 4
- Available Now!
* - Inference
- Inferentia (v1)
- inf1.24xlarge
- 16
- Available Now!
* - Training
- Trainium (v2)
- trn1.2xlarge
- 1
- Available Now!
* - Training
- Trainium (v2)
- trn1.32xlarge
- 16
- Available Now!
Neuron Device
^^^^^^^^^^^^^
Each Neuron Device consists of multiple execution units - called NeuronCores, a high throughput device memory, PCIe interfaces to the host CPU and to the other Neuron Devices and other components, depending on the Neuron Device version.
To get the number of NeuronCores per Neuron Device, the amount of Neuron Device memory and the way devices are directly connected, use the ``neuron-ls`` tool:
.. code-block:: bash
neuron-ls --topology
instance-type: trn1.32xlarge
instance-id: i-0633517e496256bf8
+--------+--------+--------+---------------+---------+
| NEURON | NEURON | NEURON | CONNECTED | PCI |
| DEVICE | CORES | MEMORY | DEVICES | BDF |
+--------+--------+--------+---------------+---------+
| 0 | 2 | 32 GB | 12, 3, 4, 1 | 10:1c.0 |
| 1 | 2 | 32 GB | 13, 0, 5, 2 | 10:1d.0 |
| 2 | 2 | 32 GB | 14, 1, 6, 3 | a0:1c.0 |
| 3 | 2 | 32 GB | 15, 2, 7, 0 | a0:1d.0 |
| 4 | 2 | 32 GB | 0, 7, 8, 5 | 20:1b.0 |
| 5 | 2 | 32 GB | 1, 4, 9, 6 | 20:1c.0 |
| 6 | 2 | 32 GB | 2, 5, 10, 7 | 90:1b.0 |
| 7 | 2 | 32 GB | 3, 6, 11, 4 | 90:1c.0 |
| 8 | 2 | 32 GB | 4, 11, 12, 9 | 20:1d.0 |
| 9 | 2 | 32 GB | 5, 8, 13, 10 | 20:1e.0 |
| 10 | 2 | 32 GB | 6, 9, 14, 11 | 90:1d.0 |
| 11 | 2 | 32 GB | 7, 10, 15, 8 | 90:1e.0 |
| 12 | 2 | 32 GB | 8, 15, 0, 13 | 10:1e.0 |
| 13 | 2 | 32 GB | 9, 12, 1, 14 | 10:1b.0 |
| 14 | 2 | 32 GB | 10, 13, 2, 15 | a0:1e.0 |
| 15 | 2 | 32 GB | 11, 14, 3, 12 | a0:1b.0 |
+--------+--------+--------+---------------+---------+
Neuron Device Topology
* * * *
│ │ │ │
▼ ▼ ▼ ▼
*––►[ 0 ]◄––►[ 1 ]◄––►[ 2 ]◄––►[ 3 ]◄––*
▲ ▲ ▲ ▲
│ │ │ │
▼ ▼ ▼ ▼
*––►[ 4 ]◄––►[ 5 ]◄––►[ 6 ]◄––►[ 7 ]◄––*
▲ ▲ ▲ ▲
│ │ │ │
▼ ▼ ▼ ▼
*––►[ 8 ]◄––►[ 9 ]◄––►[10 ]◄––►[11 ]◄––*
▲ ▲ ▲ ▲
│ │ │ │
▼ ▼ ▼ ▼
*––►[12 ]◄––►[13 ]◄––►[14 ]◄––►[15 ]◄––*
▲ ▲ ▲ ▲
│ │ │ │
* * * *
|nd_v1|
NeuronCore
^^^^^^^^^^
The NeuronCore is the primary execution unit within the accelerator. Each NeuronCore contains several execution engines
(for different types of compute operations such as tensor-based, vector and scalar), DMA engines, and a local cache.
A NeuronCore can operate independently or together with other NeuronCores, depending on the nature of the workload and the way
a model is compiled and loaded to the NeuronCores in the accelerator. Each execution engine can access the cache and DRAM attached to the accelerator device.
The primary form of data movement between the host CPU and the accelerator device, as well as between the device DRAM and NeuronCores, is Direct Memory Access (DMA).
The use of DMA enables more efficient data movement.
The Neuron Runtime Architecture
-------------------------------
|nrt_arch|
Application Interface Layer (The ``libnrt`` API)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Application Interface Layer allows applications and frameworks to use the available Neuron Devices to run
inference or training workloads. A complete reference of the C interface can be found in :ref:`nrt_api`.
Monitoring and Profiling
^^^^^^^^^^^^^^^^^^^^^^^^
The Neuron Runtime is able to capture key execution metrics which can be read in real-time using ``neuron-monitor`` and
``neuron-top``. ``neuron-monitor`` allows forwarding those metrics to Cloudwatch or a Prometheus server, enabling fleet-wide
monitoring - for more on that please refer to the ``neuron-monitor`` usage guide :ref:`neuron-monitor-ug`.
Profiling an execution is another feature of the Neuron Runtime - which provides an API for starting and stopping profiling,
as well as saving the profile data to a file, which can be used by tools such as the Neuron Tensorboard. This API is
documented in :ref:`api_profile` section.
The NEFF format and NEFF Parser
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A NEFF (*N*euron *E*xecutable *F*ile *F*ormat) is a single file container for all the artifacts needed to execute a model on one or more NeuronCores.
A NEFF is the output of the Neuron Compiler (neuron-cc). It contains Neuron machine instructions, pseudo instructions (compiler-generated instructions
which are parsed and replaced with Neuron instructions by the Neuron Runtime when the model loads), tensor information, model parameters and other components
that support the model's execution on one or more NeuronCores.
Operators that are not supported by Neuron can be compiled into CPU-executable binary and included into the NEFF as well.
The contents of a NEFF can be shown by using ``neuron-packager`` tool (which will be released soon).
Usually there is only one subgraph (which is executed on a single NeuronCore) in a NEFF:
.. code-block:: bash
NEFF Nodes:
NODE Executor Name Variable Size Type Format Shape DataType TimeSeries
1 Neuron Core sg00
image:0 3259008 IN NHWC [1 3 552 984]
net_output:0 1323972 OUT NHWC [1 78 69 123] false
In this example, there is a single subgraph, one input and one output:
|nrt_neff_single|
Some NEFFs can have multiple subgraphs (which will be deployed by the runtime on separate NeuronCores) and multiple CPU operators, as exemplified below:
.. code-block:: bash
NEFF Nodes:
NODE Executor Name Variable Size Type Format Shape DataType TimeSeries
1 Neuron Core sg00
input:0 2 IN NHWC [1 1 1 1]
nn/relu1:0 2 OUT NHWC [1 1 1 1] false
1 Neuron Core sg01
nn/relu1:0 2 IN NHWC [1 1 1 1]
nn/relu2:0 2 OUT NHWC [1 1 1 1] false
2 CPU fused_3_layout_transform
layout_transform0:0 0 OUT []
4 CPU fused_2_nn_conv2d_nn_relu
constant0 2 IN [1 1 1 1] float16
nn.relu0:0 0 OUT []
5 CPU fused_1_layout_transform_copy
nn/relu3:0 0 OUT []
6 Neuron Core sg02
nn/relu3:0 2 IN NHWC [1 1 1 1]
nn/relu4:0 2 OUT NHWC [1 1 1 1] false
6 Neuron Core sg03
nn/relu4:0 2 IN NHWC [1 1 1 1]
nn/output:0 2 OUT NHWC [1 1 1 1] false
The output above can be summarized by the graph below:
|nrt_neff|
The nodes marked with dark blue are intermediate tensors that are handled internally by the Neuron Runtime.
The other blue nodes are inputs/outputs. The green colored box indicates the operator is executed on the NeuronCore while
the red color box indicates the execution is done on the CPU.
The NEFF layer in Neuron Runtime is responsible for parsing a NEFF, validating it, and translating pseudo instructions into hardware specific
instructions and DMA descriptors.
Graph Walker and CPU Node Executor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As shown in the previous section, a NEFF can contain one or more nodes. During execution, the Neuron Runtime Graph Walker executes each node
one by one and handles copying input and output between each of them. If a node needs to be executed by the CPU, then a corresponding library function, found
in a .so file in the NEFF, is dynamically loaded using ``dlopen()`` during model load and executed during model execution. Since this library function is executed in the calling
thread’s context, the workload can be efficiently parallelized using a multi-threaded approach.
In the example below, each invocation of ``nrt_execute()`` would take 23ms: the first CPU node takes 1ms, the NeuronCore execution takes 20ms and the second CPU node takes 2 ms,
so the total latency is 23ms and the throughput is 43 calls per second (1000/23).
|nrt_neff_s|
If multiple threads are used, subsequent executions would be pipelined inside the runtime, hence increasing the throughput in this case to ~50 (1000/20).
|nrt_neff_m|
User Mode Driver
^^^^^^^^^^^^^^^^
This is the lowest level component of the Neuron Runtime and handles programming the engines, managing memory,
creating DMA descriptors to move data from host and device, handling notifications etc.
Memory Management
~~~~~~~~~~~~~~~~~
The Neuron Runtime is responsible with managing Neuron Device and host memory for the running models. The application is responsibile with
deallocating every loaded model and allocated tensor so the proper deallocation method needs to be called.
For more details, refer to :ref:`nrt_api` documentation.
Tools such as ``neuron-top`` and ``neuron-monitor`` can be used to determine the amount of memory being used at any given time.
.. _first_app:
Building the first Neuron application
-------------------------------------
The simple application presented here will load a NEFF file, use the provided binary files' contents as input tensors
(if a file wasn't provided for an input tensor, that input tensor will be zero-filled), and save the output tensors as
binary files.
Prerequisites
^^^^^^^^^^^^^
Building the application requires:
* a recent version of GCC
* installing the ``aws-neuronx-runtime-lib`` package as described in :ref:`reqs`
Running the built application requires:
* a Neuron-equipped instance as shown in :ref:`insttypes`
* installing the ``aws-neuronx-runtime-lib`` and the ``aws-neuronx-dkms`` package as described in :ref:`reqs`
* a NEFF file
Getting a NEFF file
^^^^^^^^^^^^^^^^^^^
When running any workload through a Neuron framework, the compiled NEFFs will be placed in ``/var/tmp/neuron-compile-cache``.
Additionally, setting the ``NEURON_FRAMEWORK_DEBUG`` environment variable to ``1`` before running the workload will enable
the compiled NEFFs to be written to the current directory.
The Code
^^^^^^^^
.. code-block:: c
#include <stdbool.h>
#include <nrt/nrt.h>
#include <nrt/nrt_experimental.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <time.h>
#include <errno.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <pthread.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
// Function to mmap a file in the application's memory space,
// it will return a pointer to the mmapped memory and the size
// of the mmapped data will be written to *size
void *mmap_file(const char *filepath, size_t *size) {
struct stat sb;
int fd = open(filepath, O_RDONLY);
if (fd < 0 || fstat(fd, &sb) != 0) {
fprintf(stderr, "Unable to open %s: %s\n", filepath, strerror(errno));
return MAP_FAILED;
}
*size = sb.st_size;
return mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
}
#define P_ERR(...) fprintf(stderr, __VA_ARGS__)
#define CHECK_RESULT(res, expected, ...) \
if (res != expected) { \
fprintf(stderr, __VA_ARGS__); \
exit(-1); \
}
// struct used to load input tensors from files
typedef struct {
char *name;
size_t size;
void *data;
} input_tensor_info_t;
// simple container for input_tensor_info_t
typedef struct {
input_tensor_info_t *entries;
int entry_count;
} input_tensor_info_array_t;
// Allocate tensorsets and tensors based on the info_array and returns a valid tensorset in out_tset
// containing all the newly allocated tensors
NRT_STATUS allocate_tensors(nrt_tensor_info_array_t *info_array, nrt_tensor_usage_t usage_type, nrt_tensor_set_t **out_tset) {
NRT_STATUS result;
int tensor_idx;
nrt_tensor_info_t *tensor_info = NULL;
nrt_tensor_t *tensor = NULL;
// We allocate a nrt_tensor_set which acts as a containers for nrt_tensors
result = nrt_allocate_tensor_set(out_tset);
if (result != NRT_SUCCESS) {
P_ERR("Couldn't allocate %s tensorset\n", usage_type == NRT_TENSOR_USAGE_INPUT ? "input" : "output");
}
for (tensor_idx = 0; tensor_idx < info_array->tensor_count; tensor_idx++) {
tensor_info = &info_array->tensor_array[tensor_idx];
if (tensor_info->usage != usage_type) {
continue;
}
// Allocate the tensor with the name and size found in tensor_info_array
result = nrt_tensor_allocate(NRT_TENSOR_PLACEMENT_DEVICE, 0, tensor_info->size,
tensor_info->name, &tensor);
if (result != NRT_SUCCESS) {
P_ERR("Couldn't allocate tensor %s\n", tensor_info->name);
return result;
}
// Finally add the tensors to the newly allocated tensor set
result = nrt_add_tensor_to_tensor_set(*out_tset, tensor_info->name, tensor);
if (result != NRT_SUCCESS) {
P_ERR("Couldn't add tensor %s to tensorset\n", tensor_info->name);
return result;
}
}
return NRT_SUCCESS;
}
// Tensor iterator handler - returns false if the iteration needs to stop
typedef bool (*tensor_handler)(nrt_tensor_t *, nrt_tensor_info_t *, NRT_STATUS *, void *);
// Iterates through all the tensors in the given tensorset, based on the data in info_array for the given usage_type
// and calls the handler function with the provided args pointer
// Will return the first error returned by a handler
NRT_STATUS iterate_tensors(nrt_tensor_set_t *tset, nrt_tensor_info_array_t *info_array, nrt_tensor_usage_t usage_type,
tensor_handler handler, void *args) {
NRT_STATUS result = NRT_SUCCESS;
NRT_STATUS final_result = NRT_SUCCESS;
int tensor_idx;
nrt_tensor_info_t *tensor_info = NULL;
nrt_tensor_t *tensor = NULL;
for (tensor_idx = 0; tensor_idx < info_array->tensor_count; tensor_idx++) {
tensor_info = &info_array->tensor_array[tensor_idx];
if (tensor_info->usage != usage_type) {
continue;
}
result = nrt_get_tensor_from_tensor_set(tset, tensor_info->name, &tensor);
if (result != NRT_SUCCESS) {
P_ERR("Tensor %s not found in tensor set\n", tensor_info->name);
continue;
}
result = NRT_SUCCESS;
if ((*handler)(tensor, tensor_info, &result, args) == false) {
return result;
}
if (final_result == NRT_SUCCESS && result != final_result) {
final_result = result;
}
}
return final_result;
}
// Tensor iteration handler that checks if a tensor has an input file associated with it
// based on the CLI args
bool handler_load_inputs(nrt_tensor_t *tensor, nrt_tensor_info_t *tensor_info, NRT_STATUS *result, void* args) {
NRT_STATUS res;
int idx;
input_tensor_info_array_t *info_array = (input_tensor_info_array_t *)args;
bool input_found = false;
for (idx = 0; idx < info_array->entry_count; idx++) {
if (strcmp(info_array->entries[idx].name, tensor_info->name) != 0) {
continue;
}
if (info_array->entries[idx].size != tensor_info->size) {
P_ERR("Input file for tensor %s has incorrect size %lu, expected %lu\n",
tensor_info->name, info_array->entries[idx].size, tensor_info->size);
break;
}
res = nrt_tensor_write(tensor, info_array->entries[idx].data, 0, tensor_info->size);
if (res != NRT_SUCCESS) {
P_ERR("Unable to write content to input tensor %s\n", tensor_info->name);
} else {
input_found = true;
}
}
if (!input_found) {
fprintf(stderr, "Input tensor %s will be zero-filled\n", tensor_info->name);
}
*result = NRT_SUCCESS;
return true;
}
// Tensor iteration handler that saves outputs
bool handler_save_outputs(nrt_tensor_t *tensor, nrt_tensor_info_t *tensor_info, NRT_STATUS *result, void* args) {
static char filename[280];
int fd;
// Allocating a buffer large enough to read the entire tensor
void *tensor_data = malloc(tensor_info->size);
*result = NRT_SUCCESS;
if (tensor_data == NULL) {
fprintf(stderr, "Unable to allocate memory for saving output tensor %s\n", tensor_info->name);
*result = NRT_FAILURE;
return true;
}
// Reading the tensor to the newly allocated buffer
*result = nrt_tensor_read(tensor, tensor_data, 0, tensor_info->size);
if (*result != NRT_SUCCESS) {
fprintf(stderr, "Unable to read tensor %s\n", tensor_info->name);
free(tensor_data);
return true;
}
// Saving the tensor to a file
snprintf(filename, 280, "%s.out", tensor_info->name);
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC, 0644);
if (fd < 0) {
fprintf(stderr, "Unable to open %s for writing\n", filename);
free(tensor_data);
*result = NRT_FAILURE;
return true;
}
if (write(fd, tensor_data, tensor_info->size) != tensor_info->size) {
*result = NRT_FAILURE;
fprintf(stderr, "Unable to write tensor %s contents to file %s\n", tensor_info->name, filename);
}
close(fd);
free(tensor_data);
return true;
}
// Tensor iteration handler that deallocates tensors
bool handler_free_tensor(nrt_tensor_t *tensor, nrt_tensor_info_t *tensor_info, NRT_STATUS *result, void* args) {
*result = NRT_SUCCESS;
nrt_tensor_free(&tensor);
return true;
}
int main(int argc, char *argv[]) {
NRT_STATUS result;
int idx = 0;
int tensor_idx = 0;
void *neff_data = NULL;
size_t neff_size = 0;
void *input_data = NULL;
input_tensor_info_array_t input_tensor_info_array = {0};
input_tensor_info_t *current_input = NULL;
nrt_model_t *model = NULL;
nrt_tensor_set_t *inputs = NULL;
nrt_tensor_set_t *outputs = NULL;
nrt_tensor_t *tensor = NULL;
nrt_tensor_info_array_t *tensor_info_array = NULL;
if (argc < 2) {
fprintf(stderr, "Incorrect number of args, usage: exec_test file.neff [input_1_name] [input_1_file] ...\n");
exit(-1);
}
// Try mmapping the NEFF file first, so we can fail fast if not found or
// mmap fails
neff_data = mmap_file(argv[1], &neff_size);
if (neff_data == MAP_FAILED) {
fprintf(stderr, "Unable to map file %s\n", argv[1]);
exit(-1);
}
// mmap input tensor files (if any provided) and fill the input_tensor_info array
if (argc > 3) {
input_tensor_info_array.entries = malloc((argc - 2 / 2) * sizeof(input_tensor_info_t));
for (idx = 2; idx < argc; idx += 2) {
if (idx + 1 >= argc) {
break;
}
current_input = &input_tensor_info_array.entries[input_tensor_info_array.entry_count];
input_data = mmap_file(argv[idx + 1], &current_input->size);
if (input_data == MAP_FAILED) {
fprintf(stderr, "Unable to mmap inputs file %s\n", argv[idx + 1]);
continue;
}
current_input->name = argv[idx];
current_input->data = input_data;
input_tensor_info_array.entry_count++;
}
}
// Before calling any nrt API, nrt_init must be called
// Since this is not running as part of a framework, the correct parameter for 'framework' is
// NRT_FRAMEWORK_TYPE_NO_FW and the others can be empty strings
result = nrt_init(NRT_FRAMEWORK_TYPE_NO_FW, "", "");
CHECK_RESULT(result, NRT_SUCCESS, "NRTLIB could not be initialized, error: %d\n", (int)result);
// Loading the NEFF
printf("Loading NEFF\n");
result = nrt_load(neff_data, neff_size, -1, -1, &model);
CHECK_RESULT(result, NRT_SUCCESS, "Unable to load NEFF\n");
// In order to allocate tensors, first we need to call nrt_get_model_tensor_info which
// will give us the model tensors' names and sizes in tensor_info_array
printf("Getting IO tensor information\n");
result = nrt_get_model_tensor_info(model, &tensor_info_array);
CHECK_RESULT(result, NRT_SUCCESS, "Unable to get model tensor information\n");
// Allocating tensors
printf("Creating I/O data (%ld tensors)\n", tensor_info_array->tensor_count);
result = allocate_tensors(tensor_info_array, NRT_TENSOR_USAGE_INPUT, &inputs);
CHECK_RESULT(result, NRT_SUCCESS, "Error allocating input tensors\n");
result = allocate_tensors(tensor_info_array, NRT_TENSOR_USAGE_OUTPUT, &outputs);
CHECK_RESULT(result, NRT_SUCCESS, "Error allocating input tensors\n");
// Loading input files (if provided)
iterate_tensors(inputs, tensor_info_array, NRT_TENSOR_USAGE_INPUT, handler_load_inputs,
(void*) &input_tensor_info_array);
// Executing model using the tensors in the inputs tensorset and writing the outputs to the tensors
// in the outputs tensorset
result = nrt_execute(model, inputs, outputs);
CHECK_RESULT(result, NRT_SUCCESS, "Error during model execution: %d\n", result);
// Saving outputs to files
result = iterate_tensors(outputs, tensor_info_array, NRT_TENSOR_USAGE_OUTPUT, handler_save_outputs, NULL);
if (result != NRT_SUCCESS) {
P_ERR("Error saving outputs to files\n");
}
// Unloading the model
result = nrt_unload(model);
if (result != NRT_SUCCESS) {
P_ERR("Unable to unload NEFF\n");
}
printf("Freeing tensors\n");
iterate_tensors(inputs, tensor_info_array, NRT_TENSOR_USAGE_INPUT, handler_free_tensor, NULL);
iterate_tensors(outputs, tensor_info_array, NRT_TENSOR_USAGE_OUTPUT, handler_free_tensor, NULL);
nrt_destroy_tensor_set(&inputs);
nrt_destroy_tensor_set(&outputs);
printf("Deallocating model tensor info\n");
// We are done with the tensor_info_array, we can dispose of it
nrt_free_model_tensor_info(tensor_info_array);
printf("Deallocating inputs tensor info\n");
// Unmapping the input files
for (tensor_idx = 0; tensor_idx < input_tensor_info_array.entry_count; tensor_idx++) {
munmap(input_tensor_info_array.entries[tensor_idx].data, input_tensor_info_array.entries[tensor_idx].size);
}
if (input_tensor_info_array.entries) {
free(input_tensor_info_array.entries);
}
// Clean-up the runtime
printf("Cleaning up the runtime\n");
nrt_close();
printf("DONE\n");
}
Building the example:
.. code-block:: bash
gcc run_neff.c -o run_neff -lnrt -pthread -I/opt/aws/neuron/include -L/opt/aws/neuron/lib
Running the example:
.. code-block:: bash
./run_neff my.neff [input_1] [input_1.bin] [input_2] [input_2.bin] ...
Code Breakdown
^^^^^^^^^^^^^^
Initialization and cleanup
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: c
// ...
result = nrt_init(NRT_FRAMEWORK_TYPE_NO_FW, "", "");
// ...
nrt_close();
The Neuron Runtime is initialized by calling ``nrt_init`` and all applications should call ``nrt_close`` once they're done
using it. For more details on these functions, go to the :ref:`api_init` section.
Loading the NEFF
~~~~~~~~~~~~~~~~
Once the contents of a NEFF file have been mapped to virtual memory using mmap ...
.. code-block:: c
// ...
void *mmap_file(const char *filepath, size_t *size) {
struct stat sb;
int fd = open(filepath, O_RDONLY);
if (fd < 0 || fstat(fd, &sb) != 0) {
fprintf(stderr, "Unable to open %s: %s\n", filepath, strerror(errno));
return MAP_FAILED;
}
*size = sb.st_size;
return mmap(NULL, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
}
// ...
neff_data = mmap_file(argv[1], &neff_size);
... the NEFF is loaded using ``nrt_load``. The runtime will decide the optimal placement for the model - it will
choose the best NeuronCore on which to deploy the model:
.. code-block:: c
// ...
result = nrt_load(neff_data, neff_size, -1, -1, &model);
// ...
The call will return a valid model handle in ``nrt_model_t*`` which will subsequently be
used for other calls to the Runtime API (such as ``nrt_execute``).
For more details on the model API (including ``nrt_load``), go to the :ref:`api_model` section.
Creating input/output tensors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The main container for tensors is the ``nrt_tensor_set_t*``. Tensors (``nrt_tensor_t*``) are not passed directly to the NEFF execution function, ``nrt_execute``,
they have to be wrapped in a ``nrt_tensor_set_t*``. The ``allocate_tensors`` function will allocate the tensorset and the tensors for the requested usage type
(``NRT_TENSOR_USAGE_INPUT`` or ``NRT_TENSOR_USAGE_OUTPUT``) and return the tensorset containing the allocated tensors in ``out_tset``.
.. code-block:: c
NRT_STATUS allocate_tensors(nrt_tensor_info_array_t *info_array, nrt_tensor_usage_t usage_type, nrt_tensor_set_t **out_tset) {
// ...
// We allocate a nrt_tensor_set which acts as a containers for nrt_tensors
result = nrt_allocate_tensor_set(out_tset);
// ...
for (tensor_idx = 0; tensor_idx < info_array->tensor_count; tensor_idx++) {
tensor_info = &info_array->tensor_array[tensor_idx];
if (tensor_info->usage != usage_type) {
continue;
}
// ...
// Allocate the tensor with the name and size found in tensor_info_array
result = nrt_tensor_allocate(NRT_TENSOR_PLACEMENT_DEVICE, 0, tensor_info->size,
tensor_info->name, &tensor);
// ...
// Finally add the tensors to the newly allocated tensor set
result = nrt_add_tensor_to_tensor_set(*out_tset, tensor_info->name, tensor);
// ...
}
// ...
}
Iterating through tensors in an nrt_tensor_set_t
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A helper function, ``iterate_tensors`` is used to iterate through the ``nrt_tensor_t`` in a tensorset and call the function
``handler`` for each of them. If the handler function returns ``false`` iteration ends. ``iterate_tensors`` returns the first error
reported by the handler function.
.. code-block:: c
// Tensor iterator handler - returns false if the iteration needs to stop
typedef bool (*tensor_handler)(nrt_tensor_t *, nrt_tensor_info_t *, NRT_STATUS *, void *);
NRT_STATUS iterate_tensors(nrt_tensor_set_t *tset, nrt_tensor_info_array_t *info_array, nrt_tensor_usage_t usage_type,
tensor_handler handler, void *args) {
// ...
for (tensor_idx = 0; tensor_idx < info_array->tensor_count; tensor_idx++) {
// ...
result = nrt_get_tensor_from_tensor_set(tset, tensor_info->name, &tensor);
// ...
if ((*handler)(tensor, tensor_info, &result, args) == false) {
return result;
}
// ...
}
Deallocating input/output tensors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
After the execution is complete, the tensors are deallocated using ``iterate_tensors`` and the tensorsets are deallocated
using ``nrt_destroy_tensor_set``:
.. code-block:: c
iterate_tensors(inputs, tensor_info_array, NRT_TENSOR_USAGE_INPUT, handler_free_tensor, NULL);
iterate_tensors(outputs, tensor_info_array, NRT_TENSOR_USAGE_OUTPUT, handler_free_tensor, NULL);
nrt_destroy_tensor_set(&inputs);
nrt_destroy_tensor_set(&outputs);
The ``handler_free_tensor`` function simply deallocates the given tensor:
.. code-block:: c
bool handler_free_tensor(nrt_tensor_t *tensor, nrt_tensor_info_t *tensor_info, NRT_STATUS *result, void* args) {
// ...
nrt_tensor_free(&tensor);
// ...
}
For more details on the tensor API, check out the :ref:`api_tensor` and the :ref:`api_tensorset` sections.
Executing the NEFF
~~~~~~~~~~~~~~~~~~
The NEFF is executed using a call to ``nrt_execute``. If ``nrt_execute`` completes successfully, the output tensors are
read and saved to files (one binary file per output tensor) using ``iterate_tensors``:
.. code-block:: c
// Executing model using the tensors in the inputs tensorset and writing the outputs to the tensors
// in the outputs tensorset
result = nrt_execute(model, inputs, outputs);
// ...
// Saving outputs to files
result = iterate_tensors(outputs, tensor_info_array, NRT_TENSOR_USAGE_OUTPUT, handler_save_outputs, NULL);
The iteration handler reads the tensor data and writes it to a file with the same name as the tensor:
.. code-block:: c
bool handler_save_outputs(nrt_tensor_t *tensor, nrt_tensor_info_t *tensor_info, NRT_STATUS *result, void* args) {
// ...
void *tensor_data = malloc(tensor_info->size);
// ...
// Reading the tensor to the newly allocated buffer
*result = nrt_tensor_read(tensor, tensor_data, 0, tensor_info->size);
// ...
// Saving the tensor to a file
snprintf(filename, 280, "%s.out", tensor_info->name);
fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC, 0644);
// ...
if (write(fd, tensor_data, tensor_info->size) != tensor_info->size) {
// ...
}
close(fd);
For more details on the execution API, go to the :ref:`api_exec` section.
.. _nrt_api:
The LIBNRT API
------------------
API Return Codes
^^^^^^^^^^^^^^^^
All API calls will return an NRT_STATUS value representing the return status of the call. In case of an error, an error message
will also be logged (based on the logging settings, more on that in the next section). The table below contains all the possible error codes.
Please note that some error codes only apply to certain API calls.
.. list-table::
:widths: 40 260
:header-rows: 1
* - Return Code
- Error
* - ``NRT_SUCCESS``
- Call was successful
* - ``NRT_FAILURE``
- Generic failure
* - ``NRT_INVALID``
- Invalid NEFF, bad instruction, bad DMA descriptor, input tensor name/size does not match the model, etc.
* - ``NRT_INVALID_HANDLE``
- Invalid handle (e.g. an invalid model handle)
* - ``NRT_RESOURCE``
- Failed to allocate a resource for the requested operation
* - ``NRT_TIMEOUT``
- Operation timed out
* - ``NRT_HW_ERROR``
- Hardware failure
* - ``NRT_LOAD_NOT_ENOUGH_NC``
- The number of available NeuronCores is insufficient for the requested operation
* - ``NRT_UNSUPPORTED_NEFF_VERSION``
- NEFF version unsupported
* - ``NRT_UNINITIALIZED``
- Returned when attempting an API call when the library is not initialized
* - ``NRT_CLOSED``
- Returned when attempting an API call after ``nrt_close()`` was called
* - ``NRT_EXEC_BAD_INPUT``
- Invalid input has been submitted to nrt_execute()
* - ``NRT_EXEC_COMPLETED_WITH_NUM_ERR``
- Execution completed with numerical errors (produced NaN)
* - ``NRT_EXEC_COMPLETED_WITH_ERR``
- Execution was completed with other errors, either logical (event double clear), or hardware (parity error)
* - ``NRT_EXEC_NC_BUSY``
- The neuron core is locked (in use) by another model/thread
.. _api_init:
Initialization, configuration and teardown
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. c:function:: NRT_STATUS nrt_init(nrt_framework_type_t framework, const char *fw_version, const char *fal_version)
Initializes the Neuron Runtime’s internal state and the Neuron hardware’s state.
This should be called before any other nrt_* call is attempted - although a small set of functions
are exempt from this rule (for example ``nrt_get_total_nc_count`` and ``get_nrt_version``). Any call to the NRT
library API will return NRT_FAILURE if ``nrt_init`` has not been called beforehand and that API call requires it.
The runtime can be configured by setting the appropriate environment variable before this API call.
The list of available environment variables is found in the :ref:`api_config` section.
:param framework: Can be one of:
``NRT_FRAMEWORK_TYPE_INVALID, // Invalid framework
NRT_FRAMEWORK_TYPE_NO_FW, // No framework
NRT_FRAMEWORK_TYPE_TENSORFLOW, // Tensorflow
NRT_FRAMEWORK_TYPE_PYTORCH, // Pytorch
NRT_FRAMEWORK_TYPE_MXNET // Mxnet``
This argument is used by our Neuron Tools to determine the type of application running,
it has no other impact on the functioning of the runtime.
Application using a custom framework or calling the Neuron Runtime directly should use ``NRT_FRAMEWORK_TYPE_NO_FW``.
:param const char *fw_version: version of the framework on top of which this runtime is running
:param const char *fal_version: version of the framework adapter on top of which this runtime is running
Applications using `NRT_FRAMEWORK_TYPE_NO_FW` for the first argument should use two empty strings for the versions.
.. _api_config:
Environment variables used to configure the Runtime Library
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``NEURON_RT_LOG_LOCATION=<CONSOLE/SYSLOG>, default=CONSOLE``
Chooses the output target for the Neuron Runtime logs (either console or syslog).
``NEURON_RT_LOG_LEVEL=<ERROR/WARN/INFO/DEBUG/TRACE>, default=ERROR``
Specifies the logging verbosity for the Neuron Runtime library, from ERROR (least verbose), to TRACE (most verbose).
``NEURON_RT_NUM_CORES=<n>``
Specifies how many NeuronCores are needed for the application. During ``nrt_init`` the requested number of NeuronCores are **exclusively** associated with the calling processes and
become unavailable to any other process attempting to use them. If there aren't enough NeuronCores available, ``nrt_init`` will return an error. Once the owner process has called ``nrt_close``
or exited, the NeuronCores are released and become available to be associated with another process. By default, all NeuronCores present on the instance will be made available to the caller.
``NEURON_RT_VISIBLE_CORES=<m,n,p-q>``
Similarly to the previous, it allows the calling process to get exclusive access to a set of NeuronCores, but it allows explicitly specifying which NeuronCores are available for the application based on their zero-based indices.
This variable can be a list of NeuronCores, for example: ``NEURON_RT_VISIBLE_CORES=3,4,5,6``, a range of NeuronCores, for example: ``NEURON_RT_VISIBLE_CORES=3-6``, or a combination of both: ``NEURON_RT_VISIBLE_CORES=3-5,6``.
The resulting range must be contiguous, for example this is not valid: ``NEURON_RT_VISIBLE_CORES=3,5,6`` because 4 is missing from the list, and indices need to be provided in consecutive increasing order.
.. note::
If both ``NEURON_RT_VISIBLE_CORES`` are ``NEURON_RT_NUM_CORES`` are defined, ``NEURON_RT_VISIBLE_CORES`` will be used.
``NEURON_RT_ROOT_COMM_ID=<ip_address:port>``
Mandatory for applications that run workloads containing Collective Communication operators, allows specifying the IP address and assign a port for the rank 0 worker in the Collective Compute worker pool.
For example: ``NEURON_RT_ROOT_COMM_ID=10.0.1.2:46820``.
``NEURON_RT_STOCHASTIC_ROUNDING_SEED=<value>``
Allows setting a value for the stochastic rounding seed. Has no effect on inf1.
``NEURON_RT_DEBUG_MEMLOG_MAX_SIZE=<value>, default=1024*1024``
Allows changing the number of entries in the memory allocations log. This log contains an entry for every allocation and deallocation and will be dumped to a file in case of a memory allocation failure in CSV format.
.. c:function:: NRT_STATUS nrt_close()
Closes all the devices used by the application (as defined by ``NEURON_RT_NUM_CORES``/``NEURON_RT_VISIBLE_CORES``)
and cleans up the runtime state. Note that once ``nrt_close`` has been called, most nrt_* API calls will fail if attempted.
.. _api_model:
The Model API
^^^^^^^^^^^^^
.. c:function:: NRT_STATUS nrt_load(const void *neff_bytes, size_t size, int32_t start_nc, int32_t nc_count, nrt_model_t **model)
Loads a NEFF file whose content is found in `neff_bytes`, with the given size, placing it on ``nc_count`` NeuronCores starting with NeuronCore index `start_nc`.
If either ``nc_count`` or ``start_nc`` are -1, an optimal value for each will be determined automatically. The model can be configured using a list of environment
variables read inside this API call which can be found in the :ref:`model_env` section. It returns a handle to the loaded model in the ``nrt_model_t*``
pointer if the call succeeds. The returned handle represents the loaded model and can be used with calls that operate on an ``nrt_model_t*`` (such as ``nrt_execute``).
:param neff_bytes: Pointer to existing NEFF file data
:param size: Size of data in ``neff_bytes``
:param start_nc: Index of the NeuronCore on which to stage the model. The first NeuronCore owned by the application will always have the index ``0`` - for example, even if when setting ``NEURON_RT_VISIBLE_CORES=3,4``, the two NeuronCores will be referred to as ``0`` and ``1``. If -1, an optimal index will be automatically determined (based on current NeuronCore usage).
:param nc_count: Number of NeuronCores on which to stage the model. If its value is a multiple of the amount of NeuronCores needed by the model, the model will be replicated on the number of NeuronCores specified in the argument. This feature is called **TBD** and it will be explained in detail in a separate section. If its value is -1, the model will be staged a single time, using the number of cores needed by a single instance of the model.
:param model: Model handle returned by the call which can be passed to other functions that operate on models (such as ``nrt_execute``).
.. _model_env:
Environment variables used to configure a model being loaded
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``NEURON_RT_EXEC_TIMEOUT=<n>, default=30 (inf1), default=600(trn1,inf2)``
Maximum of time, in seconds, allowed for one execution before timing out - which will cause the call to ``nrt_execute`` to fail and return ``NRT_TIMEOUT``.
``NEURON_RT_VALIDATE_HASH=<true/false>, default=false``
Verify the integrity of NEFF data being loaded by checking against a checksum found in the header.
``NEURON_RT_STOCHASTIC_ROUNDING_EN=<true/false>, default=false``
Enable stochastic rounding.
.. c:function:: NRT_STATUS nrt_load_collectives(const void *neff_bytes, size_t size, int32_t start_nc, int32_t nc_count, uint32_t g_device_id, uint32_t g_device_count, nrt_model_t **model)
Same as ``nrt_load`` (same environment variables can be used to configure the model), but must be used when loading NEFFs containing Collective Communication operators. Uses the same arguments as `nrt_load`, but adds 2 extra ones.
:param neff_bytes: Pointer to existing NEFF file data
:param size: Size of data in ``neff_bytes``
:param start_nc: Index of NeuronCore on which to stage the model. If -1, an optimal index will be automatically determined (based on current NeuronCore usage).
:param nc_count: Number of NeuronCores on which to stage the model. If its value is a multiple of the amount of NeuronCores needed by the model, the model will be replicated on the number of NeuronCores specified in the argument. This feature is called **TBD** and it will be explained in detail in a separate section. If its value is -1, the model will be staged a single time, using the number of cores needed by a single instance of the model.
:param g_device_id: Globally unique ID within the Collective Communication world associated with this model instance.
:param g_device_count: Size of the Collective Communication world (total number of participating unique IDs).
:param model: Model handle returned by the call which can be passed to other functions that operate on models (such as ``nrt_execute``).
.. c:function:: NRT_STATUS nrt_unload(nrt_model_t *model)
Unloads the given model and frees up device and host resources.
:param model: Pointer to the model to unload. All data associated with the model is deleted; do not reuse the pointer or try to deallocate it afterwards. Do not call ``nrt_unload`` again on the same ``nrt_model_t*`` pointer (think of it as a call to ``free()``).
.. c:function:: NRT_STATUS nrt_get_model_nc_count(const nrt_model_t *model, uint32_t *nc_count)
Gets the number of NeuronCores used by the model and writes that value at the address pointed by ``nc_count``.
:param model: Valid pointer to an ``nrt_model_t``.
:param nc_count: If the call completes successfully, the pointed address will contain the number of NeuronCores used by the model.
.. c:function:: NRT_STATUS nrt_get_model_tensor_info(nrt_model_t *model, nrt_tensor_info_array_t **tensor_info)
Gets input/output tensor information for a given loaded model.
:param model: Valid pointer to an ``nrt_model_t``.
:param tensor_info: Pointer to a ``nrt_tensor_info_array_t*`` which will contain the tensor information data. The function allocates memory for the structure internally which can only be correctly freed by calling ``nrt_free_model_tensor_info``.
The ``nrt_tensor_info_array_t`` struct and its dependencies are defined as follows:
.. code-block:: c
typedef struct nrt_tensor_info_array {
uint64_t tensor_count; // Total number of input/output tensors used by the model
nrt_tensor_info_t tensor_array[]; // Array of tensor info representing those tensors
} nrt_tensor_info_array_t;
typedef struct nrt_tensor_info {
char name[NRT_TENSOR_NAME_MAX]; // Name of the tensor
nrt_tensor_usage_t usage; // Type of the tensor
size_t size; // Tensor size in bytes
nrt_dtype_t dtype; // Data type
uint32_t *shape; // An array representing data shape
uint32_t ndim; // The number of dimensions (number of elements in the shape array)
} nrt_tensor_info_t;
// Usage type definitions for tensors
typedef enum nrt_tensor_usage {
NRT_TENSOR_USAGE_INPUT = 0, // Tensor is used for input
NRT_TENSOR_USAGE_OUTPUT, // Tensor is used for output
} nrt_tensor_usage_t;
// Data type definitions for tensors
typedef enum nrt_dtype {
NRT_DTYPE_UNKNOWN = 0,
NRT_DTYPE_FLOAT32,
NRT_DTYPE_FLOAT16,
NRT_DTYPE_BFLOAT16,
NRT_DTYPE_INT8,
NRT_DTYPE_UINT8,
NRT_DTYPE_INT16,
NRT_DTYPE_UINT16,
NRT_DTYPE_INT32,
NRT_DTYPE_UINT32,
NRT_DTYPE_INT64,
NRT_DTYPE_UINT64
} nrt_dtype_t;
.. c:function:: NRT_STATUS nrt_free_model_tensor_info(nrt_tensor_info_array_t *tensor_info)
Frees a ``nrt_tensor_info_array_t`` allocated by a call to ``nrt_get_model_tensor_info``. As with all deallocation functions, don’t call it more than once on the same pointer.
:param tensor_info: ``nrt_tensor_info_array_t`` to deallocate.
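As a hedged sketch (same assumptions as the loading example above), the helper below enumerates every input and output tensor declared by a loaded model and releases the info array with the dedicated free call:

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Illustrative only: list a model's input/output tensors. */
   void print_model_tensors(nrt_model_t *model)
   {
       nrt_tensor_info_array_t *info = NULL;
       if (nrt_get_model_tensor_info(model, &info) != NRT_SUCCESS) {
           return;
       }
       for (uint64_t i = 0; i < info->tensor_count; i++) {
           nrt_tensor_info_t *t = &info->tensor_array[i];
           printf("%-6s  %-32s  %zu bytes  %u dims\n",
                  t->usage == NRT_TENSOR_USAGE_INPUT ? "input" : "output",
                  t->name, t->size, t->ndim);
       }
       nrt_free_model_tensor_info(info);   /* never plain free() */
   }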
.. c:function:: NRT_STATUS nrt_get_model_instance_count(nrt_model_t *model, uint32_t *instance_count)
Returns the number of times this ``nrt_model_t`` is currently staged on the NeuronDevice(s) by writing it to the address pointed by ``instance_count``. The value will always be >= 1. This value can be used to determine the number of threads that can optimally call ``nrt_execute`` on this ``nrt_model_t``.
:param model: Valid pointer to an ``nrt_model_t``.
:param instance_count: If the call completes successfully, the address will contain the instance count for this model.
.. _api_tensor:
The Tensor API
^^^^^^^^^^^^^^
.. c:function:: NRT_STATUS nrt_tensor_allocate(nrt_tensor_placement_t tensor_placement, int logical_nc_id, size_t size, const char *name, nrt_tensor_t **tensor)
Allocates a new tensor, placing it in either host virtual memory or device memory (based on the ``tensor_placement`` argument), on the specified NeuronCore index, of a given size, and attaches the given name to it - the name is only used for log messages.
For applications running on Inferentia, ``tensor_placement`` should always be ``NRT_TENSOR_PLACEMENT_VIRTUAL``. For all other cases, ``NRT_TENSOR_PLACEMENT_DEVICE`` should be used. If successful, the ``tensor`` address will contain a valid pointer to the newly allocated ``nrt_tensor_t``.
:param tensor_placement: Controls where the tensor will be placed, the definition of the ``nrt_tensor_placement_t`` enum is as follows:
.. code-block:: c
typedef enum {
NRT_TENSOR_PLACEMENT_DEVICE, // the tensor is allocated directly in device memory
NRT_TENSOR_PLACEMENT_HOST, // the tensor is allocated in DMAable host memory (only for sizes < 4MB)
NRT_TENSOR_PLACEMENT_VIRTUAL // the tensor is allocated in host memory
} nrt_tensor_placement_t;
:param int logical_nc_id: Zero-based NeuronCore index on which to allocate the tensor (if ``tensor_placement`` is ``NRT_TENSOR_PLACEMENT_DEVICE``) or to which associate the tensor for all other cases.
:param size: Size for the new tensor.
:param name: Name for the new tensor.
:param tensor: If the call completes successfully, the address will contain a valid ``nrt_tensor_t*`` pointer.
.. c:function:: void nrt_tensor_free(nrt_tensor_t **tensor)
Frees a tensor allocated by a call to ``nrt_tensor_allocate`` and sets the ``nrt_tensor_t*`` pointer at address ``tensor`` to NULL.
:param tensor: Pointer to a pointer to a previously allocated ``nrt_tensor_t``. After the call returns, the ``nrt_tensor_t*`` pointer will be NULL.
.. c:function:: NRT_STATUS nrt_tensor_read(const nrt_tensor_t *tensor, void *buf, size_t offset, size_t size)
Reads ``size`` bytes of data from a given tensor, starting at ``offset``, to ``buf`` starting at offset 0. ``buf`` needs to be allocated with a size of at least ``size`` bytes.
:param tensor: Valid pointer to an ``nrt_tensor_t``.
:param buf: Buffer where to write read data, it needs to be at least `size` bytes in size.
:param offset: Offset within the tensor from which to begin reading.
:param size: Size to read.
.. c:function:: NRT_STATUS nrt_tensor_write(nrt_tensor_t *tensor, const void *buf, size_t offset, size_t size)
Writes ``size`` bytes of data to a given tensor, starting at ``offset``, from ``buf`` (starting at offset 0).
:param tensor: Valid pointer to an ``nrt_tensor_t``.
:param buf: Buffer containing ``size`` bytes of data to write to the tensor.
:param offset: Offset within the tensor from which to begin writing.
:param size: Size to write.
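Putting the allocation and write calls together, a hedged sketch that allocates a tensor in device memory on NeuronCore 0, copies input data into it, and frees it afterwards could look like the following; the tensor name and buffer size are placeholders:

.. code-block:: c

   /* Illustrative only. */
   float input_data[224 * 224 * 3];   /* host-side staging buffer, filled elsewhere */

   nrt_tensor_t *input = NULL;
   NRT_STATUS status = nrt_tensor_allocate(NRT_TENSOR_PLACEMENT_DEVICE,
                                           0 /* logical_nc_id */,
                                           sizeof(input_data), "input:0", &input);
   if (status == NRT_SUCCESS) {
       /* Copy the whole staging buffer into the tensor, starting at offset 0. */
       nrt_tensor_write(input, input_data, 0, sizeof(input_data));
       /* ... add the tensor to a tensor set and execute the model ... */
       nrt_tensor_free(&input);   /* sets the pointer back to NULL */
   }

On Inferentia (inf1), ``NRT_TENSOR_PLACEMENT_VIRTUAL`` would be used instead, as noted above.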
.. c:function:: size_t nrt_tensor_get_size(const nrt_tensor_t *tensor)
Returns the size, in bytes, of the given tensor.
:param tensor: Valid pointer to an ``nrt_tensor_t``.
:returns: Size in bytes of the given tensor.
.. c:function:: NRT_STATUS nrt_tensor_allocate_empty(const char *name, nrt_tensor_t **tensor)
Allocates an empty tensor, i.e. the tensor structure without any attached storage.
:param name: Name for the new tensor.
:param tensor: If the call completes successfully, the address will contain a valid ``nrt_tensor_t*`` pointer.
.. c:function:: NRT_STATUS nrt_tensor_attach_buffer(nrt_tensor_t *tensor, void *buffer, size_t size)
Attaches a caller-supplied buffer to a tensor. Any storage previously attached to the tensor is detached, and freed if it was owned by the tensor.
The attached buffer is managed by the caller and must persist through the entire lifetime of the tensor - calling `nrt_tensor_free` will not deallocate it.
This changes the memory placement of the nrt_tensor_t to ``NRT_TENSOR_PLACEMENT_VIRTUAL`` regardless of the initial memory placement type.
:param tensor: Valid pointer to an ``nrt_tensor_t``.
:param buffer: Buffer of ``size`` bytes to attach to the tensor.
:param size: Size of attached buffer.
.. c:function:: NRT_STATUS nrt_tensor_allocate_slice(const nrt_tensor_t *tensor_source, size_t offset, size_t size, const char *name, nrt_tensor_t **tensor_slice)
Allocates a new ``nrt_tensor_t`` that doesn’t have its own backing storage - instead, it will use a part (slice) of ``tensor_source``’s storage, starting at ``offset``
with the given size. The shared backing storage is reference counted and it will not be deallocated until the last tensor using it is deallocated.
:param tensor_source: Valid pointer to a ``nrt_tensor_t`` whose storage will be used by the new tensor.
:param offset: Offset within the ``tensor_source`` used as origin for the 'slice'.
:param size: Size of storage to be used by the new tensor.
:param name: Name for the new tensor.
:param tensor_slice: If the call completes successfully, the address will contain a valid, newly allocated, ``nrt_tensor_t*`` pointer.
.. c:function:: void *nrt_tensor_get_va(const nrt_tensor_t *tensor)
Returns the virtual address for an allocated tensor.
:param tensor: Valid pointer to an ``nrt_tensor_t``.
:returns: Pointer to host memory used by the tensor.
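For cases where the application already owns the buffer (for example, memory shared with another library), a hedged sketch combining ``nrt_tensor_allocate_empty``, ``nrt_tensor_attach_buffer`` and ``nrt_tensor_get_va`` might look like this; the buffer and its size are placeholders:

.. code-block:: c

   #include <stdint.h>

   /* Illustrative only: wrap caller-owned memory in an nrt_tensor_t. */
   static uint8_t my_buffer[1024 * 1024];   /* must outlive the tensor */

   nrt_tensor_t *wrapped = NULL;
   if (nrt_tensor_allocate_empty("wrapped_input", &wrapped) == NRT_SUCCESS) {
       nrt_tensor_attach_buffer(wrapped, my_buffer, sizeof(my_buffer));
       /* The data can now be accessed either through my_buffer directly
        * or through the pointer returned by nrt_tensor_get_va(). */
       void *va = nrt_tensor_get_va(wrapped);
       (void)va;
       nrt_tensor_free(&wrapped);   /* does not deallocate my_buffer */
   }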
.. _api_tensorset:
The Tensorset API
~~~~~~~~~~~~~~~~~
Tensorsets are containers for tensors.
.. c:function:: NRT_STATUS nrt_allocate_tensor_set(nrt_tensor_set_t **result)
Allocates an empty ``nrt_tensor_set_t`` and places its address in ``result``.
:param result: If the call completes successfully, this address will contain a pointer to a valid, newly allocated ``nrt_tensor_set_t``.
.. c:function:: void nrt_destroy_tensor_set(nrt_tensor_set_t **tensor_set)
Frees a tensor set allocated by a call to ``nrt_allocate_tensor_set`` and sets the ``nrt_tensor_set_t*`` pointer at address ``tensor_set`` to NULL.
:param tensor_set: Pointer to a pointer to a previously allocated ``nrt_tensor_set_t``. After the call returns, the ``nrt_tensor_set_t*`` pointer will be NULL.
.. c:function:: NRT_STATUS nrt_add_tensor_to_tensor_set(nrt_tensor_set_t *tensor_set, const char *tensor_name, nrt_tensor_t *tensor)
Adds an ``nrt_tensor`` to a tensor_set under a given name. That name can be later used to retrieve the tensor.
:param tensor_set: Pointer to a valid Tensorset where to add the tensor.
:param tensor_name: Name that will be used to access the added tensor in the container. Does not need to be the same as the ``nrt_tensor_t``’s name.
:param tensor: Pointer to a valid ``nrt_tensor_t`` to add to the Tensorset.
.. c:function:: NRT_STATUS nrt_get_tensor_from_tensor_set(nrt_tensor_set_t *tensor_set, const char *tensor_name, nrt_tensor_t **tensor)
Gets an ``nrt_tensor`` from the tensor set based on the name used when it was added by ``nrt_add_tensor_to_tensor_set`` and places its address
at the address pointed by ``tensor``. If the tensor is not found, ``NRT_FAILURE`` is returned and nothing gets written at the address pointed by ``tensor``.
:param tensor_set: Pointer to a valid Tensorset containing the tensor.
:param tensor_name: Name associated with the searched ``nrt_tensor_t`` when it was added to this Tensorset. Might be different from the ``nrt_tensor_t``’s internal name.
:param tensor: Address where the address of the found ``nrt_tensor_t`` will be placed.
.. _api_exec:
The Execution API
^^^^^^^^^^^^^^^^^
.. c:function:: NRT_STATUS nrt_execute(nrt_model_t *model, const nrt_tensor_set_t *input_set, nrt_tensor_set_t *output_set)
Runs one execution of the given ``nrt_model_t`` using the provided input tensor set and writing the results to the provided output tensor set.
:param model: Valid pointer to a `nrt_model_t` on which to run the execution.
:param input_set: Tensorset containing input data.
:param output_set: Tensor set where the output data will be written.
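Tying the model, tensor, and Tensorset APIs together, the hedged end-to-end sketch below runs a single execution. The tensor names are placeholders and must match the names reported by ``nrt_get_model_tensor_info`` for the actual model:

.. code-block:: c

   /* Illustrative only: single execution with one input and one output tensor. */
   NRT_STATUS run_once(nrt_model_t *model,
                       nrt_tensor_t *input_tensor, nrt_tensor_t *output_tensor)
   {
       nrt_tensor_set_t *inputs = NULL;
       nrt_tensor_set_t *outputs = NULL;
       NRT_STATUS status;

       nrt_allocate_tensor_set(&inputs);
       nrt_allocate_tensor_set(&outputs);

       /* Placeholder names - use the names from the model's tensor info. */
       nrt_add_tensor_to_tensor_set(inputs, "input:0", input_tensor);
       nrt_add_tensor_to_tensor_set(outputs, "output:0", output_tensor);

       status = nrt_execute(model, inputs, outputs);

       /* Assumption: destroying a tensor set does not free the tensors added to it. */
       nrt_destroy_tensor_set(&inputs);
       nrt_destroy_tensor_set(&outputs);
       return status;
   }

After a successful call, the results are read back from the output tensor with ``nrt_tensor_read``.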
.. c:function:: NRT_STATUS nrt_execute_repeat(nrt_model_t *model, const nrt_tensor_set_t *input_set, nrt_tensor_set_t *output_set, int repeat_count)
Same as ``nrt_execute`` but it will repeat the execution ``repeat_count`` times, using the outputs of iteration *n - 1* as the inputs for iteration *n*.
This requires a specially compiled NEFF and it's not a commonly used call.
:param model: Valid pointer to a `nrt_model_t` on which to run the execution.
:param input_set: Tensorset containing input data.
:param output_set: Tensor set where the output data will be written.
:param repeat_count: Number of times to repeat this execution.
.. _api_profile:
The Profiling API
^^^^^^^^^^^^^^^^^
.. c:function:: NRT_STATUS nrt_profile_start(nrt_model_t *model, const char *filename)
Begins profiling of the execution of the given model. The profile data will be written to the file specified by the path in ``filename``.
The file will be truncated if it exists.
:param model: Valid pointer to a `nrt_model_t` which will be profiled by the Neuron Runtime during execution.
:param filename: Path to a file where the profile will be written. If the file already exists, it will be truncated.
.. c:function:: NRT_STATUS nrt_profile_stop(const char *filename)
Ends profiling of the execution of a model and writes profile data to ``filename``. ``filename`` needs to be the same path as the one used for ``nrt_profile_start``.
:param filename: Path to a file where the profile will be written. If the file already exists, it will be truncated.
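A hedged usage sketch, continuing the execution example above: profiling typically brackets one or more executions, and both calls must reference the same file path (the path here is a placeholder):

.. code-block:: c

   /* Illustrative only. */
   nrt_profile_start(model, "/tmp/model_profile.ntff");
   nrt_execute(model, inputs, outputs);           /* one or more executions */
   nrt_profile_stop("/tmp/model_profile.ntff");   /* same path as nrt_profile_start */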
Other APIs
^^^^^^^^^^
.. c:function:: NRT_STATUS nrt_get_version(nrt_version_t *ver, size_t size)
Fills the provided ``nrt_version_t`` struct with version information. The ``size`` argument allows for backwards compatibility if the struct changes in future releases.
:param ver: Pointer to a ``nrt_version_t`` structure, which is currently defined as:
.. code-block:: c
typedef struct nrt_version {
uint64_t rt_major; // major version number
uint64_t rt_minor; // minor version number
uint64_t rt_patch; // patch version number
uint64_t rt_maintenance; // maintenance version number
char rt_detail[RT_VERSION_DETAIL_LEN]; // runtime version description string
char git_hash[GIT_HASH_LEN]; // runtime git hash
} nrt_version_t;
:param size_t size: Size of the ``nrt_version_t`` structure; should always be ``sizeof(nrt_version_t)``.
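For example, a hedged sketch that prints the runtime version:

.. code-block:: c

   #include <stdio.h>

   /* Illustrative only. */
   nrt_version_t ver;
   if (nrt_get_version(&ver, sizeof(nrt_version_t)) == NRT_SUCCESS) {
       printf("Neuron Runtime %llu.%llu.%llu (%s)\n",
              (unsigned long long)ver.rt_major,
              (unsigned long long)ver.rt_minor,
              (unsigned long long)ver.rt_patch,
              ver.rt_detail);
   }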
.. c:function:: NRT_STATUS nrt_get_total_nc_count(uint32_t *nc_count)
Gets the total number of NeuronCores present on the current instance. The result is not affected by the values in
``NEURON_RT_NUM_CORES`` or ``NEURON_RT_VISIBLE_CORES`` and, in fact, this function can be called before calling ``nrt_init``.
:param nc_count: If the call completes successfully, the address will contain the total number of NeuronCores present on the instance.
.. c:function:: NRT_STATUS nrt_get_visible_nc_count(uint32_t *nc_count)
Gets the total number of NeuronCores available to the application after ``nrt_init`` has parsed the configuration environment variables ``NEURON_RT_NUM_CORES`` and ``NEURON_RT_VISIBLE_CORES``
(if provided).
:param nc_count: If the call completes successfully, the address will contain the total number of NeuronCores available to the application.
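As a final hedged sketch, an application can compare the total NeuronCore count on the instance with the count actually visible to it after initialization:

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Illustrative only. */
   uint32_t total = 0, visible = 0;
   nrt_get_total_nc_count(&total);       /* may be called even before nrt_init */
   nrt_get_visible_nc_count(&visible);   /* reflects NEURON_RT_NUM_CORES / NEURON_RT_VISIBLE_CORES */
   printf("NeuronCores: %u visible out of %u total\n", visible, total);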
.. |nd_v1| image:: ../images/neuron-rt-nd-v1.png
.. |nrt_arch| image:: ../images/neuron-rt-architecture.png
.. |nrt_neff| image:: ../images/neuron-rt-neff.png
.. |nrt_neff_s| image:: ../images/neuron-rt-neff-s.png
.. |nrt_neff_m| image:: ../images/neuron-rt-neff-m.png
.. |nrt_neff_single| image:: ../images/neuron-rt-neff-single.png
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-runtime/nrt-configurable-parameters.rst.txt
```
.. _nrt-configuration:
Neuron Runtime Configuration
============================
Neuron Runtime is responsible for executing ML models on Neuron Devices. Neuron Runtime determines which NeuronCore will execute which model and how to execute it.
Configuration of the Neuron Runtime is controlled through the use of Environment variables at the process level. By default, Neuron framework extensions will take care of Neuron Runtime configuration on the user's behalf. Explicit configurations are also possible when attempting to achieve a desired behavior.
This guide provides an overview of the different environment variables available to
configure Neuron Runtime behavior.
.. list-table:: Environment Variables
:widths: 25 60 20 50 20 50
:header-rows: 1
* - Name
- Description
- Type
- Expected Values
- Default Value
- RT Version
* - ``NEURON_RT_VISIBLE_CORES``
- Range of specific NeuronCores needed by the process
- Integer range (like 1-3)
- Any value or range between 0 to Max NeuronCore in the system.
- None
- 2.0+
* - ``NEURON_RT_NUM_CORES``
- Number of NeuronCores required by the process.
- Integer
- A value from 1 to Max NeuronCore in the system.
- 0, which is interpreted as "all"
- 2.0+
* - ``NEURON_RT_LOG_LOCATION``
- Runtime log location
- string
- console or syslog
- console
- 2.0+
* - ``NEURON_RT_LOG_LEVEL``
- Runtime log verbose level
- string
- ERROR, WARNING, INFO, DEBUG, TRACE
- ERROR
- 2.0+
* - ``NEURON_RT_EXEC_TIMEOUT``
- Timeout for execution in seconds
- Integer
- 0 to INT_MAX
- 30 on inf1, 600 on trn1/inf2
- 2.0+
* - ``NEURON_RT_VALIDATE_HASH``
- Validate NEFF contents before loading into accelerator
- Boolean
- TRUE or FALSE
- FALSE
- 2.0+
* - ``NEURON_RT_MULTI_INSTANCE_SHARED_WEIGHTS``
- Share weights when loading multiple instance versions of the same model on different NeuronCores
- Boolean
- TRUE or FALSE
- FALSE
- 2.11+
* - ``NEURON_RT_ASYNC_EXEC_MAX_INFLIGHT_REQUESTS``
- Controls number of asynchronous execution requests to be supported.
- Integer
- 0 to INT_MAX; 0 is disabled.
- 0
- 2.15+
NeuronCore Allocation
---------------------
.. important ::
``NEURONCORE_GROUP_SIZES`` is being deprecated, if your application is using ``NEURONCORE_GROUP_SIZES`` please
see :ref:`neuron-migrating-apps-neuron-to-libnrt` for more details.
By default, Neuron Runtime initializes all the cores present in the system and reserves them for the current process.
.. note::
Once a NeuronCore is reserved for a process, it cannot be used by another process at all, until the process reserving that NeuronCore is terminated.
Using NEURON_RT_VISIBLE_CORES
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For parallel processing, ``NEURON_RT_VISIBLE_CORES`` can be used to control which NeuronCores each process would reserve. This variable is specified with a single NeuronCore index or an inclusive range value.
For example, if a process (myapp.py) requires one NeuronCore, then it can be started with
``NEURON_RT_VISIBLE_CORES=0`` to limit the process to NeuronCore 0. For parallel processing, multiple process can be
started (without any change to myapp.py code) with different ``NEURON_RT_VISIBLE_CORES`` values.
Here is an example that runs myapp.py on inf1.xlarge in parallel across the four different NeuronCores available in the inf1.xlarge.
::
NEURON_RT_VISIBLE_CORES=0 myapp.py &
NEURON_RT_VISIBLE_CORES=1 myapp.py &
NEURON_RT_VISIBLE_CORES=2 myapp.py &
NEURON_RT_VISIBLE_CORES=3 myapp.py &
If myapp.py required 3 NeuronCores and was running on a inf1.6xlarge (16 NeuronCores maximum), the first instance of myapp.py could use NeuronCores 0-2, the next instance could use 3-5 and so on:
::
NEURON_RT_VISIBLE_CORES=0-2 myapp.py &
NEURON_RT_VISIBLE_CORES=3-5 myapp.py &
NEURON_RT_VISIBLE_CORES=6-8 myapp.py &
NEURON_RT_VISIBLE_CORES=9-11 myapp.py &
NEURON_RT_VISIBLE_CORES=12-14 myapp.py &
Using NEURON_RT_NUM_CORES
~~~~~~~~~~~~~~~~~~~~~~~~~
If ``NEURON_RT_NUM_CORES`` is set to a value between 1 and the maximum number of NeuronCores in the instance, Neuron Runtime will attempt to automatically reserve the specified number of free NeuronCores for the process. The difference between ``NEURON_RT_VISIBLE_CORES`` and ``NEURON_RT_NUM_CORES`` is that ``NEURON_RT_VISIBLE_CORES`` specifies the exact NeuronCores to allocate, whereas ``NEURON_RT_NUM_CORES`` specifies only the number of NeuronCores needed and Neuron Runtime selects the free NeuronCores.
Using the same example as earlier, where myapp.py needed 3 cores but *which* 3 cores was of no concern, the same application could be executed in parallel up to 5 times on an inf1.6xlarge (16 NeuronCores max):
::
NEURON_RT_NUM_CORES=3 myapp.py &
NEURON_RT_NUM_CORES=3 myapp.py &
NEURON_RT_NUM_CORES=3 myapp.py &
NEURON_RT_NUM_CORES=3 myapp.py &
NEURON_RT_NUM_CORES=3 myapp.py &
Executing a 6th ``NEURON_RT_NUM_CORES=3 myapp.py &`` in the above example would fail as there is only a single NeuronCore still free.
Notes
~~~~~
1. The number of NeuronCores in an Inferentia device is 4.
2. The number of Inferentia devices depends on the instance size.
3. The NeuronCore index in ``NEURON_RT_VISIBLE_CORES`` starts at 0 and ends at (number of NeuronDevices * number of NeuronCores) - 1.
4. By default, ``NEURON_RT_NUM_CORES`` is set to ``0``, which indicates to the runtime that all cores are to be used.
5. ``NEURON_RT_VISIBLE_CORES`` takes precedence over ``NEURON_RT_NUM_CORES``. If specified, all cores within the range will be assigned to the owning process.
Logging and debug-ability
-------------------------
By default, Neuron Runtime logs to syslog at the *INFO* verbosity level, while only *ERROR* messages are logged to the console.
The following code snippet shows ways to increase/decrease the log level.
::
NEURON_RT_LOG_LEVEL=INFO myapp.py # Sets the log level for syslog and console to INFO
NEURON_RT_LOG_LOCATION=console NEURON_RT_LOG_LEVEL=QUIET myapp.py # Completely disables console logging.
By default, Neuron Runtime expects the NeuronCore to complete execution of any model within the default timeout (30 seconds on inf1, 600 seconds on trn1/inf2, as listed in the table above).
If the NeuronCore does not complete the execution within that window, the runtime fails the execution with a timeout error.
Most models take only a few milliseconds to complete, so the defaults are more than adequate.
However, if your model is expected to run longer, you can adjust the timeout with NEURON_RT_EXEC_TIMEOUT.
::
NEURON_RT_EXEC_TIMEOUT=5 myapp.py # increases the timeout to 5 seconds
Checksum
--------
To execute a model (NEFF), Neuron Runtime needs to load the NEFF file onto a NeuronCore and run it.
Neuron Runtime can perform checksum validation on each NEFF file during load to verify that the file is not corrupted.
This option is off by default to avoid the performance penalty it adds to model load time (~50%).
::
NEURON_RT_VALIDATE_HASH=true myapp1.py # enables model checksum validation while loading
NEURON_RT_VALIDATE_HASH=false myapp2.py # disables(default) model checksum validation while loading
Shared Weights (NEURON_RT_MULTI_INSTANCE_SHARED_WEIGHTS)
--------------------------------------------------------
By default, Neuron Runtime will make copies of model weights when loading the same model onto multiple NeuronCores. Changing this default to a weight-sharing mechanism is possible with Neuron Runtime 2.11 or higher by setting ``NEURON_RT_MULTI_INSTANCE_SHARED_WEIGHTS=TRUE``. Use of this flag allows more models to be loaded by reducing memory requirements, but it can come at a cost of throughput by forcing the executions across cores to compete for memory bandwidth.
Note: the use of this flag requires the model to be loaded with the multi-instance feature.
::
NEURON_RT_MULTI_INSTANCE_SHARED_WEIGHTS=TRUE myapp1.py # enables model weight sharing
NEURON_RT_MULTI_INSTANCE_SHARED_WEIGHTS=FALSE myapp2.py # disables(default) model weight sharing
Asynchronous Execution (NEURON_RT_ASYNC_EXEC_MAX_INFLIGHT_REQUESTS)
--------------------------------------------------------------------
An experimental asynchronous execution feature which can reduce latency by roughly 12% for training workloads. Starting in Neuron Runtime version 2.15, the feature is available but disabled by default. To enable it, the recommendation is to set NEURON_RT_ASYNC_EXEC_MAX_INFLIGHT_REQUESTS to 3; setting the number of inflight requests above 3 may lead to Out-Of-Memory (OOM) errors during execution. Developers using libnrt.so directly should use nrt_register_async_exec_callback to register a callback for the nrt execution thread to post the execution status to. A default callback will be registered if one is not set by the developer.
::
NEURON_RT_ASYNC_EXEC_MAX_INFLIGHT_REQUESTS=3 myapp.py # Up to 3 async exec requests at once.
NEURON_RT_ASYNC_EXEC_MAX_INFLIGHT_REQUESTS=0 myapp.py # disables async execution (default behavior)
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/compiler/neuronx-cc.rst.txt
```
.. _neuronx-cc-index:
Neuron Compiler for Trn1 & Inf2
===============================
.. toctree::
:maxdepth: 1
API Reference Guide </compiler/neuronx-cc/api-reference-guide>
Developer Guide </compiler/neuronx-cc/developer-guide>
Misc </compiler/neuronx-cc/misc-neuronx-cc>
```
TensorFlow Tutorials — AWS Neuron Documentation

https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/tensorflow/tensorflow-neuron/tutorials/index.html#tensorflow-tutorials
# TensorFlow Tutorials — AWS Neuron Documentation
## Contents
- [Before running a tutorial](#before-running-a-tutorial)
- [Computer Vision](#computer-vision)
- [Natural Language Processing](#natural-language-processing)
- [Utilizing Neuron Capabilities](#utilizing-neuron-capabilities)
_This document is relevant for_: `Inf1`
## TensorFlow Tutorials[#](#tensorflow-tutorials "Permalink to this headline")
## Before running a tutorial[#](#before-running-a-tutorial "Permalink to this headline")
You will run the tutorials on an inf1.6xlarge instance running Deep Learning AMI (DLAMI) to enable both compilation and deployment (inference) on the same instance. In a production environment we encourage you to try different instance sizes to optimize for your specific deployment needs.
Follow instructions at [TensorFlow Tutorial Setup](tensorflow-tutorial-setup.html#tensorflow-tutorial-setup) before running a TensorFlow tutorial on Inferentia. We recommend new users start with the ResNet-50 tutorial.
## Computer Vision[#](#computer-vision "Permalink to this headline")
- Tensorflow 1.x - OpenPose tutorial [\[html\]](../../../../src/examples/tensorflow/openpose_demo/openpose.html) [\[notebook\]](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/tensorflow/openpose_demo/openpose.ipynb)
- Tensorflow 1.x - ResNet-50 tutorial [\[html\]](../../../../src/examples/tensorflow/tensorflow_resnet50/resnet50.html) [\[notebook\]](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb)
- Tensorflow 1.x - YOLOv4 tutorial [\[html\]](yolo_v4_demo/yolo_v4_demo.html#tensorflow-yolo4) [\[notebook\]](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/tensorflow/yolo_v4_demo/evaluate.ipynb)
- Tensorflow 1.x - YOLOv3 tutorial [\[html\]](../../../../src/examples/tensorflow/yolo_v3_demo/yolo_v3.html) [\[notebook\]](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/tensorflow/yolo_v3_demo/yolo_v3.ipynb)
- Tensorflow 1.x - SSD300 tutorial [\[html\]](ssd300_demo/ssd300_demo.html#tensorflow-ssd300)
- Tensorflow 1.x - Keras ResNet-50 optimization tutorial [\[html\]](../../../../src/examples/tensorflow/keras_resnet50/keras_resnet50.html) [\[notebook\]](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/tensorflow/keras_resnet50/keras_resnet50.ipynb)
## Natural Language Processing[#](#natural-language-processing "Permalink to this headline")
- Tensorflow 1.x - Running TensorFlow BERT-Large with AWS Neuron [\[html\]](bert_demo/bert_demo.html#tensorflow-bert-demo)
- Tensorflow 2.x - HuggingFace DistilBERT with Tensorflow2 Neuron [\[html\]](../../../../src/examples/tensorflow/huggingface_bert/huggingface_bert.html) [\[notebook\]](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb)
## Utilizing Neuron Capabilities[#](#utilizing-neuron-capabilities "Permalink to this headline")
- Tensorflow 1.x & 2.x - Using NEURON\_RT\_VISIBLE\_CORES with TensorFlow Serving [\[html\]](../../../../src/examples/tensorflow/tensorflow_serving_tutorial.html)
_This document is relevant for_: `Inf1`
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
_This document is relevant for_: `Inf1`
# TensorFlow Tutorials[#](#tensorflow-tutorials "Permalink to this headline")
## Before running a tutorial[#](#before-running-a-tutorial "Permalink to this headline")
You will run the tutorials on an inf1.6xlarge instance running the Deep Learning AMI (DLAMI) to enable both compilation and deployment (inference) on the same instance. In a production environment we encourage you to try different instance sizes to optimize for your specific deployment needs.
Follow the instructions at [TensorFlow Tutorial Setup](tensorflow-tutorial-setup.html#tensorflow-tutorial-setup) before running a TensorFlow tutorial on Inferentia. We recommend new users start with the ResNet-50 tutorial.
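Most of the TensorFlow 1.x tutorials below share the same basic flow: export a SavedModel, compile it for Inferentia with tensorflow-neuron, and serve the compiled SavedModel. The sketch below illustrates the compile step only, assuming a tensorflow-neuron 1.x environment and an existing SavedModel; the directory names are illustrative placeholders, and the individual tutorials contain the complete, tested code.
```
import tensorflow.neuron as tfn

# Compile an existing TensorFlow 1.x SavedModel for Inferentia.
# Both directory names below are illustrative placeholders.
model_dir = "resnet50_savedmodel"        # SavedModel exported from TF 1.x
compiled_model_dir = "resnet50_neuron"   # destination for the compiled SavedModel
tfn.saved_model.compile(model_dir, compiled_model_dir)
```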
<div class="section" id="computer-vision">
<span id="tensorflow-computervision"></span><h2>Computer Vision<a class="headerlink" href="#computer-vision" title="Permalink to this headline">#</a></h2>
<ul class="simple">
<li><p>Tensorflow 1.x - OpenPose tutorial <a class="reference internal" href="../../../../src/examples/tensorflow/openpose_demo/openpose.html"><span class="std std-ref">[html]</span></a> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/tensorflow/openpose_demo/openpose.ipynb">[notebook]</a></p></li>
<li><p>Tensorflow 1.x - ResNet-50 tutorial <a class="reference internal" href="../../../../src/examples/tensorflow/tensorflow_resnet50/resnet50.html"><span class="std std-ref">[html]</span></a> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb">[notebook]</a></p></li>
<li><p>Tensorflow 1.x - YOLOv4 tutorial <a class="reference internal" href="yolo_v4_demo/yolo_v4_demo.html#tensorflow-yolo4"><span class="std std-ref">[html]</span></a> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/tensorflow/yolo_v4_demo/evaluate.ipynb">[notebook]</a></p></li>
<li><p>Tensorflow 1.x - YOLOv3 tutorial <a class="reference internal" href="../../../../src/examples/tensorflow/yolo_v3_demo/yolo_v3.html"><span class="std std-ref">[html]</span></a> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/tensorflow/yolo_v3_demo/yolo_v3.ipynb">[notebook]</a></p></li>
<li><p>Tensorflow 1.x - SSD300 tutorial <a class="reference internal" href="ssd300_demo/ssd300_demo.html#tensorflow-ssd300"><span class="std std-ref">[html]</span></a></p></li>
<li><p>Tensorflow 1.x - Keras ResNet-50 optimization tutorial <a class="reference internal" href="../../../../src/examples/tensorflow/keras_resnet50/keras_resnet50.html"><span class="std std-ref">[html]</span></a> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/tensorflow/keras_resnet50/keras_resnet50.ipynb">[notebook]</a></p></li>
</ul>
<div class="toctree-wrapper compound">
</div>
</div>
<div class="section" id="natural-language-processing">
<span id="tensorflow-nlp"></span><h2>Natural Language Processing<a class="headerlink" href="#natural-language-processing" title="Permalink to this headline">#</a></h2>
<ul class="simple">
<li><p>Tensorflow 1.x - Running TensorFlow BERT-Large with AWS Neuron <a class="reference internal" href="bert_demo/bert_demo.html#tensorflow-bert-demo"><span class="std std-ref">[html]</span></a></p></li>
<li><p>Tensorflow 2.x - HuggingFace DistilBERT with Tensorflow2 Neuron <a class="reference internal" href="../../../../src/examples/tensorflow/huggingface_bert/huggingface_bert.html"><span class="std std-ref">[html]</span></a> <a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/tensorflow/huggingface_bert/huggingface_bert.ipynb">[notebook]</a></p></li>
</ul>
<div class="toctree-wrapper compound">
</div>
</div>
<div class="section" id="utilizing-neuron-capabilities">
<span id="tensorflow-utilize-neuron"></span><h2>Utilizing Neuron Capabilities<a class="headerlink" href="#utilizing-neuron-capabilities" title="Permalink to this headline">#</a></h2>
<ul class="simple">
<li><p>Tensorflow 1.x & 2.x - Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving <a class="reference internal" href="../../../../src/examples/tensorflow/tensorflow_serving_tutorial.html"><span class="std std-ref">[html]</span></a></p></li>
</ul>
<div class="toctree-wrapper compound">
</div>
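The TensorFlow Serving tutorial above covers `NEURON_RT_VISIBLE_CORES` in detail. As a quick illustration of the general idea, the variable restricts which NeuronCores a process can see and must be set before the Neuron runtime in that process initializes; the core range below is purely illustrative.
```
import os

# Restrict this process to NeuronCores 0-3 (illustrative range).
# This must be set before any Neuron model is loaded, because the
# runtime reads the variable at initialization time.
os.environ["NEURON_RT_VISIBLE_CORES"] = "0-3"
```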
_This document is relevant for_: `Inf1`
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-runtime/configuration-guide.rst.txt
|
```
Configuration Guide
===================
.. toctree::
   :maxdepth: 1

   Runtime Configuration </neuron-runtime/nrt-configurable-parameters>
```
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/runtime/aws-neuronx-runtime-lib/index.rst.txt
|
```
.. _neuron-runtime-rn:
Neuron Runtime Release Notes
============================
Neuron Runtime consists of a kernel mode driver and C/C++ libraries which provides APIs to access Neuron Devices. The runtime itself (libnrt.so) is integrated into the ML frameworks for simplicity of deployment. The Neuron Runtime supports training models and executing inference on the Neuron Cores.
.. contents:: Table of contents
:local:
:depth: 1
Known issues
------------
Updated : 04/29/2022
- In rare cases, multi-process applications running under heavy stress may encounter a model load failure. This may require reloading the Neuron Driver as a workaround.
NEFF Support Table:
-------------------
Use this table to determine the version of Runtime that will support the
version of NEFF you are using. NEFF version is determined by the version
of the Neuron Compiler.
============ ===================== ===================================
NEFF Version Runtime Version Range Notes
============ ===================== ===================================
0.6 \* All versions of RT support NEFF 0.6
1.0 >= 1.0.6905.0 Starting support for 1.0 NEFFs
2.0 >= 1.6.5.0 Starting support for 2.0 NEFFs
============ ===================== ===================================
Neuron Runtime Library [2.17.7.0]
---------------------------------
Date: 9/14/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Improved logging by printing out NEFF name in debug logs of nrt_execute
Bug fixes
^^^^^^^^^
* Fixed hang that would occur when running a NEFF which contains embedding update instructions in multiple functions.
* Fixed issue where the Neuron Runtime registered the same memory multiple times to an EFA device causing applications to exceed the number of physical pages that could be registered.
* Fixed assert (``void tvm::runtime::GraphRuntime::PatchDltDataPtr(DLTensor*, uint32_t*, size_t): Assertion `tensor_get_mem_type(grt->io_tensor) == NRT_TENSOR_MEM_TYPE_MALLOC' failed.``) that occurred on INF1, caused by an uninitialized pointer.
* Fixed potential hang that can occur when partial replica groups for collectives are present in a NEFF.
Neuron Runtime Library [2.16.14.0]
---------------------------------
Date: 9/01/2023
Bug fixes
^^^^^^^^^
* Fixed a segfault on failure to complete Neuron Device initialization. New behavior will avoid the failure and escalate a fixed Neuron Runtime error code (NERR_FAIL, code 0x1)
* Improved error messages around Neuron Device initialization failures.
Neuron Runtime Library [2.16.8.0]
---------------------------------
Date: 8/09/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Add runtime version and capture time to NTFF
* Improved Neuron Device copy times for all instance types via async DMA copies
* Improved error messages for unsupported topologies (example below)
global comm ([COMM ID]) has less channels than this replica group ([REPLICA GROUP ID]) :
likely not enough EFA devices found if running on multiple nodes or CC not permitted on this group [[TOPOLOGY]]
* Improved logging message for collectives timeouts by adding rank id to trace logs (example below)
[gid: [RANK ID]] exchange proxy tokens
* Improved error messages when loading NEFFs with unsupported instructions (example below)
Unsupported hardware operator code [OPCODE] found in neff.
Please make sure to upgrade to latest aws-neuronx-runtime-lib and aws-neuronx-collective; for detailed installation instructions visit Neuron documentation.
Bug fixes
^^^^^^^^^
* Fixed “failed to get neighbor input/output addr” error when loading collectives NEFF compiled with callgraph flow and NEFF without callgraph flow.
Neuron Runtime Library [2.15.14.0]
---------------------------------
Date: 8/09/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Reduced the contiguous memory size requirement for initializing Neuron Runtime on trn1/inf2 instance families by shrinking some of the notification buffers. A particularly large decrease was the reduction of a 4MB error notification buffer down to 64K. The expectation is that on memory-constrained or highly fragmented systems, the Neuron Runtime will come up more reliably than in previous versions.
Neuron Runtime Library [2.15.11.0]
---------------------------------
Date: 7/19/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added experimental asynchronous execution feature which can reduce latency by roughly 12% for training workloads. See Runtime Configuration guide for details on how to use the feature.
* AllReduce with All-to-all communication pattern enabled for 16 ranks on TRN1/TRN1N within the instance (intranode); choice of 16 ranks is limited to NeuronCores 0-15 or 16-31.
* Minor improvement in end-to-end execution latency after reducing the processing time required for benign error notifications.
* Reduced notification overhead by using descriptor packing improving DMA performance for memory bound workloads by up to 25%.
* Improved load speed by removing extraneous checks that were previously being performed during loads.
* Minor performance boost to CC Ops by removing the need to sort execution end notifications.
* Bumped profiling NTFF version to version 2 to remove duplicate information which may result in hitting protobuf limits, and avoid crashing when using an older version of Neuron tools to postprocess the profile.
Please upgrade to Neuron tools 2.12 or above to view profiles captured using this version of the Neuron runtime.
Neuron Runtime Library [2.14.8.0]
---------------------------------
Date: 6/14/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added All-to-All All-Reduce support for Neuron Collective operations, which is expected to improve All-Reduce performance by 3-7x in most cases.
* Added more descriptive NEURON_SCRATCHPAD_PAGE_SIZE to eventually replace NEURON_RT_ONE_TMPBUF_PAGE_SIZE_MB
* Neuron Runtime is now getting the device BDF from Neuron Driver for internal use.
Bug fixes
^^^^^^^^^
* Fixed rare race condition caused by DMA memory barrier not being set for certain data transfers leading to non-determinism in outputs
* Fixed NeuronCore latency not being counted properly in Neuron metrics
* Removed stack allocation of the error notification buffer when parsing error notifications; the previous stack allocation could lead to stack overflows on smaller stack sizes.
Neuron Runtime Library [2.13.6.0]
---------------------------------
Date: 05/01/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added support for internal Neuron Compiler change, Queue Set Instances, which leads to reduced NEFF footprints on Neuron Devices. In some cases, the reduction is as much as 60% smaller DMA ring size.
Bug fixes
^^^^^^^^^
* Fixed a rare fabric deadlock scenario (hang) in NeuronCore v2 related to notification events.
* Ensure tensor store writes are complete before synchronization event is set.
Neuron Runtime Library [2.12.23.0]
---------------------------------
Date: 04/19/2023
Bug fixes
^^^^^^^^^
* Minor internal bug fixes.
Neuron Runtime Library [2.12.14.0]
---------------------------------
Date: 03/28/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added support for 16 channels and 16 EFA devices, which is required for enabling EC2 TRN1N instances with Neuron.
* Added support for hierarchical All-Reduce and Reduce-Scatter. These implementations are now used by default and provide up to a 75% reduction in latency for 2MB buffers across 256 ranks.
* Added support for loading more than one Neuron Custom Operator library.
* Added support for loading multicore Neuron Custom Operators.
* Updated INF2 to support rank 1 topology.
* Minor improvement in model load time for small models (below 100MB).
Neuron Runtime Library [2.11.43.0]
---------------------------------
Date: 02/08/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added support for Neuron Custom C++ operators as an experimental feature. As of this release, usage of Custom C++ operators requires a reset of the Neuron Runtime after running a model which invoked a Neuron Custom C++ operator.
* Added support for a counter that enables measuring FLOPS in neuron-top and neuron-monitor.
* Added support for LRU cache for DMA rings.
Bug fixes
^^^^^^^^^
* Fixed load failures due to memory bounds checking for Neuron Collective Compute operations in Runtime during model load.
* Fixed an internal bug that was preventing Neuron Runtime metrics from posting.
* Fixed a bug that caused segfaults as a result of double frees and stack overflows.
Neuron Runtime Library [2.10.18.0]
---------------------------------
Date: 11/07/2022
New in this release
^^^^^^^^^^^^^^^^^^^
* Minor bug fixes and enhancements.
Neuron Runtime Library [2.10.15.0]
---------------------------------
Date: 10/26/2022
.. note::
   Neuron Driver version 2.5 or newer is required for this version of Neuron Runtime Library
New in this release
^^^^^^^^^^^^^^^^^^^
* Changed the default runtime behavior to reset NeuronCores when initializing applications. With this change, resetting the Neuron Driver after an application crash is no longer necessary. The new reset functionality is controlled by the environment variable ``NEURON_RT_RESET_CORES``; see :ref:`nrt-configuration` for more information.
Bug fixes
^^^^^^^^^
* Fixed a bug where Stochastic Rounding was not being set for collective communication operators
* Fixed an issue with triggering DMA for large tensors
* Increased default execution timeout to 30 seconds
* Fixed IOQ resetting queue to incorrect ring id value
* Updated the Neuron driver for more reliable behavior of driver device reset. Driver no longer busy waits on reset or gets stuck waiting on reset, which caused kernel taints or caused driver unload attempts to fail.
* Fixed a bug that prevented collective communication over tensors larger than 2GB
* Fixed a bug that caused intermittent memory corruption when unloading a model
* Fixed a bug that caused exhaustion of the EFA memory registration pool after multiple model reloads.
Neuron Runtime Library [2.9.64.0]
---------------------------------
Date: 10/10/2022
This release specifically adds support for training workloads on one or more EC2 TRN1 instances.
Required Neuron Driver Version: 2.5 or newer
New in this release
^^^^^^^^^^^^^^^^^^^
* Broke out runtime into a separate package called aws-neuronx-runtime-lib.
* Added RUNPATH for discovery of libnrt.so, can be overridden with LD_LIBRARY_PATH.
* Added support for multiple collective compute operations, e.g. All-Reduce, Reduce-Scatter, All-Gather.
* Added Send/Recv operation support
* Added support for using multiple DMA engines with single pseudo embedding update instruction.
* Changed instruction buffer alignment to 32K.
* Reduced memory required during NEFF swapping.
* Enabled notifications for send/recv collectives operations.
* Added trace apis in support of execution profiling.
* Added support for TPB reset (default: off).
* Added version checking for libnccom (aws-neuronx-collectives).
* Added new runtime version API.
* Added 8-channel support for Trn1.
* Improved debug outputs.
* Added support for write combining on BAR4.
* Increased default execution timeout from 2 seconds to 30 seconds.
* Improved handling of zero-sized tensors
Neuron Runtime 2.x (``libnrt.so``) release [2.2.51.0]
-----------------------------------------------------
Date: 03/25/2022
* Fixed an invalid memory access that could occur when unloading models.
* Reduced severity of logging for numerical errors from ERROR to WARN.
* Improved handling of models with numerous CPU operations to avoid inference failure due to memory exhaustion.
Neuron Runtime 2.x (``libnrt.so``) release [2.2.31.0]
-----------------------------------------------------
Date: 01/20/2022
New in the release
^^^^^^^^^^^^^^^^^^
* Changed error notifications from ``WARN`` to ``ERROR`` in cases when the causing problem is non-recoverable.
* Changed handling of inference timeouts (``NERR_TIMEOUT``) to avoid failure when the timeout is related to a software thread scheduling conflict.
Bug fixes
^^^^^^^^^
* Increased the number of data queues in Neuron Runtime 2.x to match what was previously used in Neuron Runtime 1.x. The use
of a smaller number of data queues in Neuron Runtime 2.x was leading to crashes in a limited number of models.
* Fixed the way Neuron Runtime 2.x updates the inference end timestamp. Previously, the Neuron Runtime 2.x update of the inference
end timestamp could lead to negative latency statistics in neuron-monitor with certain models.
Neuron Runtime 2.x (``libnrt.so``) release [2.2.18.0]
-----------------------------------------------------
Date: 11/05/2021
- Resolved an issue that affected the use of Neuron within containers. In the previous Neuron Runtime release (libnrt.so.2.2.15.0), when /dev/neuron0
was not used by the application, Neuron Runtime attempted and failed to initialize /dev/neuron0 because the user didn't pass /dev/neuron0 to the
container. This Neuron Runtime release (``libnrt.so.2.2.18.0``) allows customers to launch containers with specific NeuronDevices other
than /dev/neuron0.
Neuron Runtime 2.x (``libnrt.so``) release [2.2.15.0]
-----------------------------------------------------
Date: 10/27/2021
New in this release
^^^^^^^^^^^^^^^^^^^
- :ref:`First release of Neuron Runtime 2.x <introduce-libnrt>` - In this release we are
introducing Neuron Runtime 2.x which is a shared library named
(``libnrt.so``) and replacing Neuron Runtime 1.x server
(``neuron-rtd``). Upgrading to ``libnrt.so`` improves throughput and
latency, simplifies Neuron installation and upgrade process,
introduces new capabilities for allocating NeuronCores to
applications, streamlines container creation, and deprecates tools
that are no longer needed. The new library-based runtime
(``libnrt.so``) is integrated into Neuron’s ML Frameworks (with the exception of MXNet 1.5) and Neuron
Tools packages directly - users no longer need to install/deploy the
``aws-neuron-runtime``\ package.
.. important::
- You must update to the latest Neuron Driver (``aws-neuron-dkms`` version 2.1 or newer)
for proper functionality of the new runtime library.
- Read :ref:`introduce-libnrt`
application note that describes :ref:`why are we making this
change <introduce-libnrt-why>` and
how :ref:`this change will affect the Neuron
SDK <introduce-libnrt-how-sdk>` in detail.
- Read :ref:`neuron-migrating-apps-neuron-to-libnrt` for detailed information on how to
migrate your application.
```
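Several of the releases above introduce behavior that is controlled through environment variables, for example the NeuronCore reset behavior added in 2.10.15.0 via `NEURON_RT_RESET_CORES`. The sketch below shows the typical way such a setting is applied from Python; the value shown is the one described for disabling the reset, but the Runtime Configuration guide remains the authoritative reference for variable names and accepted values.
```
import os

# Keep the pre-2.10.15.0 behavior of not resetting NeuronCores when the
# application initializes. The value must be in the environment before the
# Neuron runtime is initialized (i.e. before the first model load); confirm
# the accepted values in the Runtime Configuration guide before relying on it.
os.environ["NEURON_RT_RESET_CORES"] = "0"
```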
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/compiler/neuronx-cc/api-reference-guide.rst.txt
|
```
API Reference Guide
===================
.. toctree::
   :maxdepth: 1

   Neuron Compiler CLI Reference Guide </compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide>
```
|
Get Started with PyTorch Neuron — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/general/quick-start/torch-neuron.html#torch-quick-start
|
# Get Started with PyTorch Neuron — AWS Neuron Documentation
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
## Get Started with PyTorch Neuron[#](#get-started-with-pytorch-neuron "Permalink to this headline")
This page provides links that will help you quickly get started with [PyTorch Neuron](../../frameworks/torch/index.html#pytorch-neuronx-main) for both inference and training.
Note
The instructions below are for Ubuntu 20. If you are looking for complete setup instructions for different platforms, please [check here](../setup/index.html#setup-guide-index).
- Please follow the instructions at [launch an Amazon EC2 Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance) to launch an instance. When choosing the instance type at the EC2 console, please make sure to select the correct instance type.
- To get more information about instance sizes and pricing see: [Trn1 web page](https://aws.amazon.com/ec2/instance-types/trn1/), [Inf2 web page](https://aws.amazon.com/ec2/instance-types/inf2/), [Inf1 web page](https://aws.amazon.com/ec2/instance-types/inf1/)
- Select your Amazon Machine Image (AMI) of choice; please note that Neuron supports the Amazon Linux 2 AMI (HVM) - Kernel 5.10.
- When launching a Trn1, please adjust your primary EBS volume size to a minimum of 512GB.
- After launching the instance, follow the instructions in [Connect to your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-connect-to-instance-linux) to connect to the instance
Note
If you are facing a connectivity issue during the model loading process on a Trn1 instance with Ubuntu, it is likely due to Ubuntu limitations with multiple network interfaces. To solve this problem, please follow the steps mentioned [here](../../frameworks/torch/torch-neuronx/training-troubleshooting.html#trn1-ubuntu-troubleshooting).
Users are highly encouraged to use DLAMI to launch the instances, since DLAMIs come with the required fix.
```
# Configure Linux for Neuron repository updates
. /etc/os-release
sudo tee /etc/apt/sources.list.d/neuron.list > /dev/null <<EOF
deb https://apt.repos.neuron.amazonaws.com ${VERSION_CODENAME} main
EOF
wget -qO - https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB | sudo apt-key add -
# Update OS packages
sudo apt-get update -y
# Install OS headers
sudo apt-get install linux-headers-$(uname -r) -y
# Install git
sudo apt-get install git -y
# install Neuron Driver
sudo apt-get install aws-neuronx-dkms=2.* -y
# Install Neuron Runtime
sudo apt-get install aws-neuronx-collectives=2.* -y
sudo apt-get install aws-neuronx-runtime-lib=2.* -y
# Install Neuron Tools
sudo apt-get install aws-neuronx-tools=2.* -y
# Add PATH
export PATH=/opt/aws/neuron/bin:$PATH
```
The following commands install torch-neuron for `Inf1`. (A separate torch-neuronx flow applies to `Trn1` and `Inf2`.)
```
# Install Python venv
sudo apt-get install -y python3.8-venv g++
# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1
# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip
# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels
# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
# Install PyTorch Neuron
python -m pip install torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
```
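Once the torch-neuron environment above is active, compilation for Inf1 typically follows a trace-and-save pattern. The sketch below uses a torchvision ResNet-50 purely as an illustration (the model choice and output file name are not prescribed); the Torchvision ResNet50 tutorial linked below walks through the complete flow.
```
import torch
import torch_neuron  # registers the torch.neuron compilation API
from torchvision import models

# Load a pretrained model and put it in inference mode
model = models.resnet50(pretrained=True)
model.eval()

# Trace/compile the model for Inferentia using an example input
example = torch.zeros([1, 3, 224, 224], dtype=torch.float32)
model_neuron = torch.neuron.trace(model, example_inputs=[example])

# Save the compiled artifact for later deployment
model_neuron.save("resnet50_neuron.pt")
```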
- Torchvision ResNet50 tutorial [\[html\]](../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html) [\[notebook\]](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.ipynb)
Visit PyTorch Neuron section for more
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
|
<label class="overlay overlay-navbar" for="__navigation">
<div class="visually-hidden">Toggle navigation sidebar</div>
</label>
<!-- Checkboxes to toggle the in-page toc -->
<input type="checkbox" class="sidebar-toggle" name="__page-toc" id="__page-toc" aria-label="Toggle in-page Table of Contents">
<label class="overlay overlay-pagetoc" for="__page-toc">
<div class="visually-hidden">Toggle in-page Table of Contents</div>
</label>
<!-- Headers at the top -->
<div class="announcement header-item noprint">Neuron 2.14.0 is released! check <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release"> What's New </a> and <a class="reference internal" style="color:white;" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html"> Announcements </a></div>
<div class="header header-item noprint"></div>
<div class="container-fluid" id="banner"></div>
<div class="container-xl">
<div class="row">
<!-- Sidebar -->
<div class="bd-sidebar noprint" id="site-navigation">
<div class="bd-sidebar__content">
<div class="bd-sidebar__top"><div class="navbar-brand-box">
<a class="navbar-brand text-wrap" href="../../index.html">
<!-- `logo` is deprecated in Sphinx 4.0, so remove this when we stop supporting 3 -->
<img src="../../_static/Site-Merch_Neuron-ML-SDK_Editorial.png" class="logo" alt="logo">
<h1 class="site-logo" id="site-title">AWS Neuron Documentation</h1>
</a>
</div><form class="bd-search d-flex align-items-center" action="../../search.html" method="get">
<i class="icon fas fa-search"></i>
<input type="search" class="form-control" name="q" id="search-input" placeholder="Search the docs ..." aria-label="Search the docs ..." autocomplete="off">
</form><nav class="bd-links" id="bd-docs-nav" aria-label="Main">
<div class="bd-toc-item active">
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Overview
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="docs-quicklinks.html">
Quick Links
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="index.html">
Get Started with Neuron
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="github-samples.html">
GitHub Samples
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../benchmarks/index.html">
Performance
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../release-notes/index.html">
What’s New
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../announcements/index.html">
Announcements
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Frameworks
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../frameworks/torch/index.html">
PyTorch Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-1" name="toctree-checkbox-1" type="checkbox">
<label for="toctree-checkbox-1">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../frameworks/torch/torch-setup.html">
Pytorch Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../frameworks/torch/inference-torch-neuronx.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-2" name="toctree-checkbox-2" type="checkbox">
<label for="toctree-checkbox-2">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/inference/tutorials-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-3" name="toctree-checkbox-3" type="checkbox">
<label for="toctree-checkbox-3">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../src/examples/pytorch/torch-neuronx/bert-base-cased-finetuned-mrpc-inference-on-trn1-tutorial.html">
Compiling and Deploying HuggingFace Pretrained BERT on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/inference/tutorial-torchserve-neuronx.html">
BERT TorchServe Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/tutorials/tutorial-libtorch.html">
LibTorch C++ Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html">
Compiling and Deploying ResNet50 on Trn1 or Inf2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../src/examples/pytorch/torch-neuronx/t5-inference-tutorial.html">
T5 model inference on Trn1 or Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/additional-examples-inference-torch-neuronx.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-4" name="toctree-checkbox-4" type="checkbox">
<label for="toctree-checkbox-4">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/">
AWS Neuron Samples GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/transformers-neuronx">
Transformers Neuron GitHub samples
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/inference-api-guide-torch-neuronx.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-5" name="toctree-checkbox-5" type="checkbox">
<label for="toctree-checkbox-5">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-trace.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Tracing API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) NeuronCore Placement APIs
<strong>
[Experimental]
</strong>
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-analyze.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Analyze API for Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/inference/api-torch-neuronx-data-parallel.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) DataParallel API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/inference/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-6" name="toctree-checkbox-6" type="checkbox">
<label for="toctree-checkbox-6">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/inference/core-placement.html">
NeuronCore Allocation and Model Placement for Inference (
<span class="xref std std-ref">
torch-neuronx
</span>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/inference/trace-vs-xla-lazytensor.html">
Comparison of Traced Inference versus XLA
<span class="xref std std-ref">
Lazy Tensor
</span>
Inference (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../appnotes/torch-neuronx/torch-neuronx-dataparallel-app-note.html">
Data Parallel Inference on torch_neuronx
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/misc-inference-torch-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-7" name="toctree-checkbox-7" type="checkbox">
<label for="toctree-checkbox-7">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../frameworks/torch/inference-torch-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-8" name="toctree-checkbox-8" type="checkbox">
<label for="toctree-checkbox-8">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/tutorials/tutorials-inference-torch-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-9" name="toctree-checkbox-9" type="checkbox">
<label for="toctree-checkbox-9">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/tutorials/tutorials-torch-neuron-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/tutorials/tutorials-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/additional-examples-inference-torch-neuron.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-10" name="toctree-checkbox-10" type="checkbox">
<label for="toctree-checkbox-10">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/api-reference-guide-torch-neuron.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-11" name="toctree-checkbox-11" type="checkbox">
<label for="toctree-checkbox-11">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/api-compilation-python-api.html">
PyTorch Neuron trace Python API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/api-torch-neuron-dataparallel-api.html">
torch.neuron.DataParallel API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/api-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../frameworks/torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`

# Get Started with PyTorch Neuron[#](#get-started-with-pytorch-neuron "Permalink to this headline")

This page provides links that will help you quickly get started with [PyTorch Neuron](../../frameworks/torch/index.html#pytorch-neuronx-main) for both inference and training.

Note: The instructions below are for Ubuntu 20. If you are looking for complete setup instructions for other platforms, please [check here](../setup/index.html#setup-guide-index).

## Launch the Instance

- Follow the instructions at [launch an Amazon EC2 Instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-launch-instance) to launch an instance. When choosing the instance type at the EC2 console, make sure to select the correct instance type.
- For more information about instance sizes and pricing, see the [Trn1 web page](https://aws.amazon.com/ec2/instance-types/trn1/), [Inf2 web page](https://aws.amazon.com/ec2/instance-types/inf2/), and [Inf1 web page](https://aws.amazon.com/ec2/instance-types/inf1/).
- Select your Amazon Machine Image (AMI) of choice; note that Neuron supports Amazon Linux 2 AMI (HVM) - Kernel 5.10.
- When launching a Trn1, adjust your primary EBS volume size to a minimum of 512GB.
- After launching the instance, follow the instructions in [Connect to your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html#ec2-connect-to-instance-linux) to connect to the instance.

Note: If you face a connectivity issue during model loading on a Trn1 instance running Ubuntu, it is likely caused by Ubuntu limitations with multiple network interfaces. To resolve it, follow the steps described in the [training troubleshooting guide](../../frameworks/torch/torch-neuronx/training-troubleshooting.html#trn1-ubuntu-troubleshooting). Users are highly encouraged to use a DLAMI to launch instances, since DLAMIs come with the required fix.

## Install Drivers and Tools

```
# Configure Linux for Neuron repository updates
. /etc/os-release
sudo tee /etc/apt/sources.list.d/neuron.list > /dev/null <<EOF
deb https://apt.repos.neuron.amazonaws.com ${VERSION_CODENAME} main
EOF
wget -qO - https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB | sudo apt-key add -
# Update OS packages
sudo apt-get update -y
# Install OS headers
sudo apt-get install linux-headers-$(uname -r) -y
# Install git
sudo apt-get install git -y
# Install Neuron Driver
sudo apt-get install aws-neuronx-dkms=2.* -y
# Install Neuron Runtime
sudo apt-get install aws-neuronx-collectives=2.* -y
sudo apt-get install aws-neuronx-runtime-lib=2.* -y
# Install Neuron Tools
sudo apt-get install aws-neuronx-tools=2.* -y
# Add PATH
export PATH=/opt/aws/neuron/bin:$PATH
```

## Install PyTorch Neuron (`torch-neuron`) for Inf1

For `torch-neuronx` (`Trn1`, `Inf2`), see the [PyTorch Neuron (torch-neuronx) setup instructions](../setup/torch-neuronx.html). The instructions below install `torch-neuron` for `Inf1`.

```
# Install Python venv
sudo apt-get install -y python3.8-venv g++
# Create Python venv
python3.8 -m venv aws_neuron_venv_pytorch_inf1
# Activate Python venv
source aws_neuron_venv_pytorch_inf1/bin/activate
python -m pip install -U pip
# Install Jupyter notebook kernel
pip install ipykernel
python3.8 -m ipykernel install --user --name aws_neuron_venv_pytorch_inf1 --display-name "Python (torch-neuron)"
pip install jupyter notebook
pip install environment_kernels
# Set pip repository pointing to the Neuron repository
python -m pip config set global.extra-index-url https://pip.repos.neuron.amazonaws.com
# Install PyTorch Neuron
python -m pip install torch-neuron neuron-cc[tensorflow] "protobuf" torchvision
```

## Run Tutorial

- Torchvision ResNet50 tutorial [\[html\]](../../src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.html) [\[notebook\]](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/pytorch/torch-neuronx/resnet50-inference-on-trn1-tutorial.ipynb)

Visit the [PyTorch Neuron section](../../frameworks/torch/index.html#pytorch-neuronx-main) for more.

_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
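The linked tutorial walks through this flow end to end. As a quick orientation only, the following is a minimal sketch of compiling and running a torchvision model with `torch-neuron` on Inf1. It assumes the virtual environment created above is active and that `torch-neuron`, `neuron-cc[tensorflow]`, and `torchvision` were installed as shown; the file name `resnet50_neuron.pt` is illustrative.

```python
import torch
import torch_neuron  # registers the Neuron tracing backend with PyTorch
from torchvision import models

# Load a pretrained ResNet50 and switch it to inference mode
model = models.resnet50(pretrained=True)
model.eval()

# Example input matching the shape the compiled graph will accept
example = torch.rand(1, 3, 224, 224)

# Compile for Inferentia; the first call can take several minutes
model_neuron = torch.neuron.trace(model, example_inputs=[example])

# Save the compiled artifact and reload it like any TorchScript module
model_neuron.save("resnet50_neuron.pt")
loaded = torch.jit.load("resnet50_neuron.pt")

# Run inference on the NeuronCore
output = loaded(example)
print(output.shape)  # expected: torch.Size([1, 1000])
```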
|
2023-09-29T20:54:57.790Z
|
Neuron Apache MXNet (Incubating) Tutorials — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/frameworks/mxnet-neuron/tutorials/index.html#mxnet-tutorials
|
# Neuron Apache MXNet (Incubating) Tutorials — AWS Neuron Documentation
## Contents
- [Before running a tutorial](#before-running-a-tutorial)
- [Computer Vision](#computer-vision)
- [Natural Language Processing](#natural-language-processing)
- [Utilizing Neuron Capabilities](#utilizing-neuron-capabilities)
_This document is relevant for_: `Inf1`
## Neuron Apache MXNet (Incubating) Tutorials[#](#neuron-apache-mxnet-incubating-tutorials "Permalink to this headline")
## Before running a tutorial[#](#before-running-a-tutorial "Permalink to this headline")
You will run the tutorials on an inf1.6xlarge instance running the Deep Learning AMI (DLAMI), so that both compilation and deployment (inference) can be done on the same instance. In a production environment, we encourage you to try different instance sizes to optimize for your specific deployment needs.
Follow instructions at [MXNet Tutorial Setup](mxnet-tutorial-setup.html#mxnet-tutorial-setup) before running an MXNet tutorial on Inferentia.
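After completing the setup, it can help to confirm that the Neuron-enabled MXNet build is importable before launching a tutorial notebook. This is only a minimal sanity-check sketch; exact package layout depends on the MXNet Neuron version installed during setup.

```python
# Minimal sanity check inside the tutorial environment
import mxnet as mx

print("MXNet version:", mx.__version__)

# The Inf1 tutorials compile models through the Neuron extension; on the
# MXNet 1.5 based package this is exposed as mx.contrib.neuron (assumption:
# your environment was prepared per the MXNet Tutorial Setup instructions).
from mxnet.contrib import neuron  # raises ImportError if the Neuron build is missing
```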
## Natural Language Processing[#](#natural-language-processing "Permalink to this headline")
- MXNet 1.8: Using data parallel mode tutorial [\[html\]](../../../src/examples/mxnet/data_parallel/data_parallel_tutorial.html) [\[notebook\]](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/mxnet/data_parallel/data_parallel_tutorial.ipynb)
## Utilizing Neuron Capabilities[#](#utilizing-neuron-capabilities "Permalink to this headline")
- NeuronCore Groups tutorial [\[html\]](../../../src/examples/mxnet/resnet50_neuroncore_groups.html) [\[notebook\]](https://github.com/aws-neuron/aws-neuron-sdk/blob/v2.14.1/src/examples/mxnet/resnet50_neuroncore_groups.ipynb)
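To give a rough sense of what the NeuronCore Groups tutorial covers, below is a minimal, non-authoritative sketch of reserving NeuronCores and addressing them from MXNet Neuron. The environment variable name and the `mx.neuron(i)` device-index semantics are assumptions here, and the checkpoint name `resnet-50_compiled` is only an example of a previously compiled model; the tutorial itself is the authoritative reference.

```python
import os

# Reserve two NeuronCores for this process (assumption: the installed
# mxnet-neuron release reads NEURON_RT_NUM_CORES at runtime initialization,
# so it must be set before the runtime starts).
os.environ["NEURON_RT_NUM_CORES"] = "2"

import mxnet as mx

# Load a previously compiled checkpoint (illustrative name)
sym, args, aux = mx.model.load_checkpoint("resnet-50_compiled", 0)

# mx.neuron(i) is assumed to address the i-th reserved NeuronCore, so two
# independently bound executors can serve requests in parallel.
ctx_a = mx.neuron(0)
ctx_b = mx.neuron(1)
```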
_This document is relevant for_: `Inf1`
|
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement API [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuron/developer-guide-torch-neuron.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-12" name="toctree-checkbox-12" type="checkbox">
<label for="toctree-checkbox-12">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/bucketing-app-note.html">
Running Inference on Variable Input Shapes with Bucketing
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/torch-neuron-dataparallel-app-note.html">
Data Parallel Inference on PyTorch Neuron
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/guides/torch-lstm-support.html">
Developer Guide - PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
<code class="xref py py-class docutils literal notranslate">
<span class="pre">
LSTM
</span>
</code>
Support
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/guides/core-placement/torch-core-placement.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Core Placement
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuron/misc-inference-torch-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-13" name="toctree-checkbox-13" type="checkbox">
<label for="toctree-checkbox-13">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-pytorch.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) Supported operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuron/troubleshooting-guide.html">
Troubleshooting Guide for PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuron/torch-neuron.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuron
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../torch/training-torch-neuronx.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-14" name="toctree-checkbox-14" type="checkbox">
<label for="toctree-checkbox-14">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuronx/tutorials/training/tutorials-training-torch-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-15" name="toctree-checkbox-15" type="checkbox">
<label for="toctree-checkbox-15">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/tutorials/training/bert.html">
Hugging Face BERT Pretraining Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/tutorials/training/mlp.html">
Multi-Layer Perceptron Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/tutorials/training/finetune_hftrainer.html">
PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/tutorials/training/finetune_t5.html">
Fine-tune T5 model on Trn1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/tutorials/training/zero1_gpt2.html">
ZeRO-1 Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/tutorials/training/analyze_for_training.html">
Analyze for Training Tutorial
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuronx/additional-examples-training.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-16" name="toctree-checkbox-16" type="checkbox">
<label for="toctree-checkbox-16">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/neuronx-nemo-megatron">
AWS Neuron Reference for Nemo Megatron GitHub Repository
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-eks-samples">
AWS Neuron Samples for EKS
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-parallelcluster-samples">
AWS Neuron Samples for AWS ParallelCluster
</a>
</li>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/torch-neuronx/training">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuronx/api-reference-guide/training/index.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-17" name="toctree-checkbox-17" type="checkbox">
<label for="toctree-checkbox-17">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/api-reference-guide/training/pytorch-neuron-parallel-compile.html">
PyTorch Neuron neuron_parallel_compile CLI (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/api-reference-guide/training/torch-neuron-envvars.html">
PyTorch Neuron Environment Variables (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Fframeworks/mxnet-neuron/tutorials/index.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/frameworks/mxnet-neuron/tutorials/index.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../_sources/frameworks/mxnet-neuron/tutorials/index.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.rst</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#before-running-a-tutorial">
Before running a tutorial
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#computer-vision">
Computer Vision
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#natural-language-processing">
Natural Language Processing
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#utilizing-neuron-capabilities">
Utilizing Neuron Capabilities
</a>
</li>
</ul>
</nav>
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>Neuron Apache MXNet (Incubating) Tutorials</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
<div>
<h2> Contents </h2>
</div>
<nav aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#before-running-a-tutorial">
Before running a tutorial
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#computer-vision">
Computer Vision
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#natural-language-processing">
Natural Language Processing
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#utilizing-neuron-capabilities">
Utilizing Neuron Capabilities
</a>
</li>
</ul>
</nav>
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
|
2023-09-29T20:54:57.985Z
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/compiler/neuronx-cc/misc-neuronx-cc.rst.txt
|
```
Misc (neuronx-cc)
=================
.. toctree::
:maxdepth: 1
FAQ </compiler/neuronx-cc/faq>
What's New </release-notes/compiler/neuronx-cc/index>
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">Misc (neuronx-cc)
=================
.. toctree::
:maxdepth: 1
FAQ </compiler/neuronx-cc/faq>
What's New </release-notes/compiler/neuronx-cc/index>
</pre></body></html>
|
2023-09-29T20:54:57.992Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/compiler/index.rst.txt
|
```
.. _neuron_cc:
Neuron Compiler
===============
The Neuron Compiler accepts Machine Learning models in various formats (TensorFlow, MXNet, PyTorch, XLA HLO) and optimizes them to run on Neuron devices.
The Neuron compiler is invoked within the ML framework, where ML models are sent to
the compiler by the Neuron Framework plugin. The resulting compiler artifact is called
a NEFF file (Neuron Executable File Format), which is in turn loaded by the Neuron runtime onto the Neuron device.
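For illustration only: with the PyTorch Neuron (``torch-neuronx``) plugin for Trn1/Inf2, the compiler is invoked implicitly when a model is traced. The sketch below is a minimal, hedged example (the tiny model and input shape are placeholders, not a recommendation) showing how a framework call produces the compiled artifact:

.. code-block:: python

   import torch
   import torch_neuronx

   # A small example model; any torch.nn.Module works the same way.
   model = torch.nn.Sequential(
       torch.nn.Linear(128, 64),
       torch.nn.ReLU(),
   ).eval()
   example = torch.rand(1, 128)

   # Tracing routes the model through the Neuron compiler; the compiled
   # artifact (NEFF) is embedded in the returned TorchScript module.
   neuron_model = torch_neuronx.trace(model, example)

   # Save the compiled module for later loading on a Neuron device.
   torch.jit.save(neuron_model, "model_neuron.pt")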
.. toctree::
:maxdepth: 1
:hidden:
/compiler/neuronx-cc
.. toctree::
:maxdepth: 1
:hidden:
/compiler/neuron-cc
.. tab-set::
.. tab-item:: Neuron Compiler for Trn1 & Inf2
.. dropdown:: API Reference Guide
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
* :ref:`Neuron Compiler CLI Reference Guide <neuron-compiler-cli-reference-guide>`
.. dropdown:: Developer Guide
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
* :ref:`neuronx-cc-training-mixed-precision`
.. dropdown:: Misc
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
:open:
* :ref:`FAQ <neuronx_compiler_faq>`
* :ref:`What's New <neuronx-cc-rn>`
.. tab-item:: Neuron Compiler for Inf1
.. dropdown:: API Reference Guide
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
* :ref:`neuron-compiler-cli-reference`
.. dropdown:: Developer Guide
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
* :ref:`neuron-cc-training-mixed-precision`
.. dropdown:: Misc
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
:open:
* :ref:`FAQ <neuron_compiler_faq>`
* :ref:`What's New <neuron-cc-rn>`
* :ref:`neuron-supported-operators`
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _neuron_cc:
Neuron Compiler
===============
The Neuron Compiler accepts Machine Learning models in various formats (TensorFlow, MXNet, PyTorch, XLA HLO) and optimizes them to run on Neuron devices.
The Neuron compiler is invoked within the ML framework, where ML models are sent to
the compiler by the Neuron Framework plugin. The resulting compiler artifact is called
a NEFF file (Neuron Executable File Format) that in turn is loaded by the Neuron runtime to the Neuron device.
.. toctree::
:maxdepth: 1
:hidden:
/compiler/neuronx-cc
.. toctree::
:maxdepth: 1
:hidden:
/compiler/neuron-cc
.. tab-set::
.. tab-item:: Neuron Compiler for Trn1 & Inf2
.. dropdown:: API Reference Guide
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
* :ref:`Neuron Compiler CLI Reference Guide <neuron-compiler-cli-reference-guide>`
.. dropdown:: Developer Guide
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
* :ref:`neuronx-cc-training-mixed-precision`
.. dropdown:: Misc
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
:open:
* :ref:`FAQ <neuronx_compiler_faq>`
* :ref:`What's New <neuronx-cc-rn>`
.. tab-item:: Neuron Compiler for Inf1
.. dropdown:: API Reference Guide
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
* :ref:`neuron-compiler-cli-reference`
.. dropdown:: Developer Guide
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
* :ref:`neuron-cc-training-mixed-precision`
.. dropdown:: Misc
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
:open:
* :ref:`FAQ <neuron_compiler_faq>`
* :ref:`What's New <neuron-cc-rn>`
* :ref:`neuron-supported-operators`
</pre></body></html>
|
2023-09-29T20:54:58.002Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/compiler/neuronx-cc/faq.rst.txt
|
```
.. _neuronx_compiler_faq:
Neuron Compiler FAQ (``neuronx-cc``)
====================================
.. contents:: Table of contents
:local:
:depth: 1
Where can I compile to Neuron?
---------------------------------
The one-time compilation step from the standard framework-level model to
NEFF binary may be performed on any EC2 instance or even
on-premises.
We recommend using a high-performance compute server of your choice (C5 or
z1d instance types) for the fastest compile times and for ease of use with
a prebuilt `DLAMI <https://aws.amazon.com/machine-learning/amis/>`__.
Developers can also install Neuron in their own environments; this
approach may work well, for example, when building a large fleet for
inference: model creation, training, and compilation are done in the
training fleet, and the resulting NEFF files are distributed to the
inference fleet by a configuration management application.
.. _neuron-vs-neuronx:
What is the difference between ``neuron-cc`` and ``neuronx-cc``?
----------------------------------------------------------------
* ``neuron-cc`` is the Neuron Compiler with a TVM front-end; it supports only :ref:`neuroncores-v1-arch`.
* ``neuronx-cc`` is the Neuron Compiler with an XLA front-end; it currently supports
  :ref:`neuroncores-v2-arch`. ``neuronx-cc`` support of :ref:`neuroncores-v1-arch` is currently a
  :ref:`Roadmap Item <neuron_roadmap>`.
Should I use ``neuron-cc`` or ``neuronx-cc``?
---------------------------------------------
See :ref:`neuron-vs-neuronx`
My current neural network is based on FP32, how can I use it with Neuron?
-------------------------------------------------------------------------
Developers who want to train their models in FP32 for best accuracy can
compile and deploy them with Neuron. The Neuron compiler automatically converts
FP32 to internally supported datatypes, such as FP16 or BF16.
You can find more details about FP32 data type support
and performance and accuracy tuning
in :ref:`neuronx-cc-training-mixed-precision` or :ref:`neuron-cc-training-mixed-precision`.
The Neuron compiler preserves the application interface - FP32 inputs and outputs.
Transferring such large tensors may become a bottleneck for your application.
Therefore, you can improve execution time by casting the inputs and outputs to
FP16 or BF16 in the ML framework prior to compilation.
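As a minimal sketch of this (PyTorch shown; the tiny model and shapes are illustrative assumptions, and other frameworks offer equivalent casts):

.. code-block:: python

   import torch
   import torch_neuronx

   # A small FP32 model used purely for illustration.
   model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU()).eval()

   # Cast the weights and the example input to BF16 before compilation so the
   # compiled interface uses BF16 and smaller tensors cross the host/device boundary.
   model = model.to(torch.bfloat16)
   example = torch.rand(1, 128, dtype=torch.bfloat16)

   neuron_model = torch_neuronx.trace(model, example)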
Which operators does Neuron support?
---------------------------------------
You can use the ``neuronx-cc list-operators`` command on the CLI to list the supported operators. See :ref:`neuron-compiler-cli-reference-guide`.
To request support for new operators, open an issue on our `GitHub forum <https://github.com/aws/aws-neuron-sdk/issues/new>`_.
Any operators that Neuron Compiler doesn't support?
---------------------------------------------------
Models with control flow and dynamic shapes are not currently supported. You will
need to partition the model using the framework prior to compilation.
.. note::
Starting with :ref:`neuroncores-v2-arch` Neuron supports control-flow and dynamic shapes.
Stay tuned and follow the :ref:`Neuron Roadmap <neuron_roadmap>`.
Will I need to recompile if I update the runtime/driver version?
----------------------------------------------------------------------
The compiler and runtime are committed to maintaining compatibility for
major version releases with each other. The versioning is defined as
major.minor, with compatibility for all versions with the same major
number. If the versions mismatch, an error notification is logged and
the load will fail. This will then require the model to be recompiled.
I have a NEFF binary, how can I tell which compiler version generated it?
-------------------------------------------------------------------------
We will provide a utility to help with this soon.
How long does it take to compile?
------------------------------------
It depends on the model and its size and complexity, but this generally
takes a few minutes.
Why is my model producing different results compared to CPU/GPU?
----------------------------------------------------------------
:ref:`neuroncores-v2-arch` supports multiple casting modes for floating point numbers, each with
associated implications for performance and accuracy. The default casting mode
is a pragmatic balance between performance and accuracy, however on some models
it may result in loss of precision.
See the :option:`--auto-cast` and :option:`--auto-cast-type` options in :ref:`neuron-compiler-cli-reference-guide` for details on how to adjust the casting mode.
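For example, when compiling through a framework the casting options can usually be forwarded to the compiler. The sketch below assumes the ``torch-neuronx`` flow and that ``torch_neuronx.trace`` accepts a ``compiler_args`` list; check the PyTorch Neuron trace API and the CLI reference above for the exact parameter name and the values supported by your compiler version:

.. code-block:: python

   import torch
   import torch_neuronx

   model = torch.nn.Linear(128, 64).eval()
   example = torch.rand(1, 128)

   # '--auto-cast none' (an accuracy-first setting) keeps computation in FP32
   # at some performance cost, which helps when debugging result differences
   # against CPU/GPU baselines.
   neuron_model = torch_neuronx.trace(
       model,
       example,
       compiler_args=["--auto-cast", "none"],
   )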
Do you support model *<insert model type>*?
-------------------------------------------
``neuronx-cc`` has explicit support for select model families using the :option:`--model-type` option, though many other model types are supported. You can also inspect supported operators using the :option:`list-operators` sub-command. See the :ref:`neuron-compiler-cli-reference-guide` for details.
More generally, support for new operators and models is continually being added. See our :ref:`neuron_roadmap` for details.
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _neuronx_compiler_faq:
Neuron Compiler FAQ (``neuronx-cc``)
====================================
.. contents:: Table of contents
:local:
:depth: 1
Where can I compile to Neuron?
---------------------------------
The one-time compilation step from the standard framework-level model to
NEFF binary may be performed on any EC2 instance or even
on-premises.
We recommend using a high-performance compute server of choice (C5 or
z1d instance types), for the fastest compile times and ease of use with
a prebuilt `DLAMI <https://aws.amazon.com/machine-learning/amis/>`__.
Developers can also install Neuron in their own environments; this
approach may work well for example when building a large fleet for
inference, allowing the model creation, training and compilation to be
done in the training fleet, with the NEFF files being distributed by a
configuration management application to the inference fleet.
.. _neuron-vs-neuronx:
What is the difference between ``neuron-cc`` and ``neuronx-cc``?
----------------------------------------------------------------
* ``neuron-cc`` is the Neuron Compiler with TVM front-end, ``neuron-cc`` supports only :ref:`neuroncores-v1-arch`.
* ``neuronx-cc`` is the Neuron Compiler with XLA fron-end, ``neuronx-cc`` currently supports
:ref:`neuroncores-v2-arch`, ``neuronx-cc`` support of :ref:`neuroncores-v1-arch` is currently a
:ref:`Roadmap Item <neuron_roadmap>`.
Should I use ``neuron-cc`` or ``neuronx-cc``?
---------------------------------------------
See :ref:`neuron-vs-neuronx`
My current neural network is based on FP32, how can I use it with Neuron?
-------------------------------------------------------------------------
Developers who want to train their models in FP32 for best accuracy can
compile and deploy them with Neuron. The Neuron compiler automatically converts
FP32 to internally supported datatypes, such as FP16 or BF16.
You can find more details about FP32 data type support
and performance and accuracy tuning
in :ref:`neuronx-cc-training-mixed-precision` or :ref:`neuron-cc-training-mixed-precision`.
The Neuron compiler preserves the application interface - FP32 inputs and outputs.
Transferring such large tensors may become a bottleneck for your application.
Therefore, you can improve execution time by casting the inputs and outputs to
FP16 or BF16 in the ML framework prior to compilation.
Which operators does Neuron support?
---------------------------------------
You can use the ``neuronx-cc list-operators`` command on the cli to list the operators. See :ref:`neuron-compiler-cli-reference-guide`.
To request support for new operators, open an issue on our `GitHub forum <https://github.com/aws/aws-neuron-sdk/issues/new>`_.
Any operators that Neuron Compiler doesn't support?
---------------------------------------------------
Models with control-flow and dynamic shapes are not supported now. You will
need to partition the model using the framework prior to compilation.
.. note::
Starting with :ref:`neuroncores-v2-arch` Neuron supports control-flow and dynamic shapes.
Stay tuned and follow the :ref:`Neuron Roadmap <neuron_roadmap>`.
Will I need to recompile again if I updated runtime/driver version?
----------------------------------------------------------------------
The compiler and runtime are committed to maintaining compatibility for
major version releases with each other. The versioning is defined as
major.minor, with compatibility for all versions with the same major
number. If the versions mismatch, an error notification is logged and
the load will fail. This will then require the model to be recompiled.
I have a NEFF binary, how can I tell which compiler version generated it?
-------------------------------------------------------------------------
** We will bring a utility out to help with this soon.
How long does it take to compile?
------------------------------------
It depends on the model and its size and complexity, but this generally
takes a few minutes.
Why is my model producing different results compared to CPU/GPU?
----------------------------------------------------------------
:ref:`neuroncores-v2-arch` supports multiple casting modes for floating point numbers, each with
associated implications for performance and accuracy. The default casting mode
is a pragmatic balance between performance and accuracy, however on some models
it may result in loss of precision.
See the :option:`--auto-cast` and :option:`--auto-cast-type` options in :ref:`neuron-compiler-cli-reference-guide` for details on how to adjust the casting mode.
Do you support model *<insert model type>*?
-------------------------------------------
``neuronx-cc`` has explicit support for select model families using the :option:`--model-type` option, though many other model types are supported. You can also inspect supported operators using the :option:`list-operators` sub-command. See th :ref:`neuron-compiler-cli-reference-guide` for details.
More generally, support for new operators and models is continually being added. See our :ref:`neuron_roadmap` for details.
</pre></body></html>
|
2023-09-29T20:54:58.082Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/runtime/aws-neuronx-dkms/index.rst.txt
|
```
.. _neuron-driver-release-notes:
Neuron Driver Release Notes
===========================
.. contents:: Table of contents
:local:
:depth: 1
Known issues
------------
Updated: 04/29/2022
- In rare cases of multi-process applications running under heavy stress, a model load failure may occur. This may require reloading the Neuron Driver as a workaround.
Neuron Driver release [2.13.4.0]
--------------------------------
Date: 9/14/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added sysfs support for showing connected devices on trn1.32xl, inf2.24xl, and inf2.48xl instances.
Neuron Driver release [2.12.18.0]
--------------------------------
Date: 9/01/2023
Bug Fixes
^^^^^^^^^
* Added fixes required by Neuron K8 components for improving reliability of pod failures (see :ref:`Neuron K8 release notes <neuron-k8-rn>` for more details).
* Added fixes required by Neuron K8 components to support zero-based indexing of Neuron Devices in Kubernetes deployments.
Neuron Driver release [2.12.11.0]
--------------------------------
Date: 8/28/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added FLOP count to sysfs (flop_count)
* Added connected Neuron Device ids to sysfs (connected_devices)
* Added async DMA copy support
* Suppressed benign timeout/retry messages
Bug Fixes
^^^^^^^^^
* Allocated CC-Cores to the correct NeuronCore, splitting CC-Cores evenly between NeuronCores.
Neuron Driver release [2.11.9.0]
--------------------------------
Date: 7/19/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added support for creating batch DMA queues.
Bug Fixes
^^^^^^^^^
* Error message, "ncdev is not NULL", was being printed unnecessarily. Fixed.
* Fixed DMA timeouts during NeuronCore reset of a neighboring core, caused by an incorrect nc_id (NeuronCore ID) assigned to reserved memory.
Neuron Driver release [2.10.11.0]
--------------------------------
Date: 6/14/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added memory usage breakdown by category to the Neuron Sysfs nodes. New categories are code, misc, tensors, constants, and scratchpad. Please see the Sysfs page under Neuron Tools for more detailed description of each.
* Improved NeuronCore initialization (nrt_init) performance by approximately 1 second.
Bug Fixes
^^^^^^^^^
* Fixed small timing window during NeuronCore resets, which previously would timeout during memcpy
* Removed potential double free of memory when terminating the Neuron Driver.
* Fixed sysfs race condition, which was leading to Neuron Driver crash during termination.
Neuron Driver release [2.9.4.0]
--------------------------------
Date: 05/01/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added dma_buf support, which is needed for future EFA implementations in the Linux kernel.
* Added new IOCTL to get Neuron Device BDF (used by Neuron Runtime)
* Added optional support for sysfs notify (off by default). See Neuron Sysfs documentation (under Neuron System Tools) for more details.
Bug Fixes
^^^^^^^^^
* Fixed max DMA queue size constant to be the correct size - previous incorrect sizing had potential to lead to DMA aborts (execution timeout).
Neuron Driver release [2.8.4.0]
--------------------------------
Date: 03/28/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Supports both Trn1n and Inf2 instance types.
* Renamed NEURON_ARCH_INFERENTIA=>NEURON_ARCH_V1 and NEURON_ARCH_TRN=>NEURON_ARCH_V2
* Under sysfs nodes, the following changes were made:
* Changed “infer” metrics to “execute” metrics
* Added peak memory usage metric
* Removed empty dynamic metrics directory
* Removed refresh rate metric
* Fixed arch type names in sysfs
Bug Fixes
^^^^^^^^^
* Fixed minor memory leak when closing the Neuron Runtime.
* Fixed memory leaks on error paths in Neuron Driver.
* Added a workaround to resolve hangs when a NeuronCore reset is run while another core is performing DMA operations.
Neuron Driver release [2.7.33.0]
--------------------------------
Date: 02/24/2023
Bug Fixes
^^^^^^^^^
* Added a retry mechanism to mitigate possible data copy failures during reset of a NeuronCore. An info log message will be emitted when this occurs indicating that the retry was attempted. An example::
kernel: [726415.485022] neuron:ndma_memcpy_wait_for_completion: DMA completion timeout for UDMA_ENG_33 q0
kernel: [726415.491744] neuron:ndma_memcpy_offset_move: Failed to copy memory during a NeuronCore reset: nd 0, src 0x100154480000, dst 0x100154500000, size 523264. Retrying the copy.
::
Neuron Driver release [2.7.15.0]
--------------------------------
Date: 02/08/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added Neuron sysfs metrics under ``/sys/devices/virtual/neuron_device/neuron{0,1, ...}/metrics/``
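A small, illustrative way to inspect these metrics from user space (the set of metric files varies by driver version, so this sketch simply walks whatever is present under the documented path):

.. code-block:: python

   import os

   metrics_root = "/sys/devices/virtual/neuron_device/neuron0/metrics"

   # Walk the metrics tree and print each readable entry; the entries and
   # their layout depend on the installed Neuron Driver version.
   for dirpath, _dirnames, filenames in os.walk(metrics_root):
       for name in filenames:
           path = os.path.join(dirpath, name)
           try:
               with open(path) as fh:
                   print(path, "=>", fh.read().strip())
           except OSError:
               pass  # some entries may not be readable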
Neuron Driver release [2.6.26.0]
--------------------------------
Date: 11/07/2022
New in this release
^^^^^^^^^^^^^^^^^^^
* Minor bug fixes and improvements.
Neuron Driver release [2.5.38.0]
--------------------------------
Neuron Driver now supports INF1 and TRN1 EC2 instance types. Name of the driver package changed from aws-neuron-dkms to aws-neuronx-dkms. Please remove the older driver package before installing the newest one.
Date: 10/10/2022
New in this release
^^^^^^^^^^^^^^^^^^^
* Support added for EC2 Trn1 instance types and ML training workloads.
* Added missing GPL2 LICENSE file.
* Changed package name to aws-neuronx-dkms (was previously minus the 'x').
* Security Update -- blocked user space access to control registers and DMA control queues intended to be used by the Neuron Driver only.
* Added support for DMA Aborts to avoid hangs.
* Added support for TPB Reset.
* Added sysfs entries for triggering resets and reading core counts.
* Added write combining on BAR4.
* Added PCI Device ID update as part of install.
* Added handling for known duplicate device id error.
Bug Fixes
^^^^^^^^^
* Fixed a null pointer free scenario.
* Fixed installation issue related to install without internet connectivity.
Neuron Driver release [2.3.26.0]
--------------------------------
Date: 08/02/2022
Bug Fixes
^^^^^^^^^
- Security Update: Blocked user space access to control registers and DMA control queues intended to be used by the Neuron Driver only. Recommending upgrade to all customers.
Neuron Driver release [2.3.11.0]
--------------------------------
Date: 05/27/2022
New in this release
^^^^^^^^^^^^^^^^^^^
- This driver is required to support future releases of the Neuron Runtime. Included in the release is both a bug fix to avoid a kernel crash scenario and an increased compatibility range to ensure compatibility with future versions of Neuron Runtime.
Bug Fixes
^^^^^^^^^
- Correction to huge aligned memory allocation/freeing logic that was previously susceptible to crashes in the kernel. The crash would bring down the OS. Recommending upgrade to all customers.
Neuron Driver release [2.3.3.0]
--------------------------------
Date: 04/29/2022
New in this release
^^^^^^^^^^^^^^^^^^^
- Minor performance improvements on inference and loading of models.
Bug Fixes
^^^^^^^^^
- Reduced Host CPU usage when reading ``hw_counters`` metric from neuron-monitor
- Minor bug fixes.
Neuron Driver release [2.2.14.0]
--------------------------------
Date: 03/25/2022
New in this release
^^^^^^^^^^^^^^^^^^^
- Minor updates
Neuron Driver release [2.2.13.0]
--------------------------------
Date: 01/20/2022
New in this release
^^^^^^^^^^^^^^^^^^^
- Minor updates
Neuron Driver release [2.2.6.0]
-------------------------------
Date: 10/27/2021
New in this release
^^^^^^^^^^^^^^^^^^^
- Memory improvements made to ensure all allocations are made with 4K
alignments.
Resolved issues
^^^^^^^^^^^^^^^
- No longer delays 1s per NeuronDevice when closing Neuron Tools
applications.
- Fixes an Ubuntu 20 build issue.
Neuron Driver release [2.1]
---------------------------
- Support is added for Neuron Runtime 2.x (``libnrt.so``).
- Support for previous releases of Neuron Runtime 1.x is continued with
Driver 2.x releases.
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _neuron-driver-release-notes:
Neuron Driver Release Notes
===========================
.. contents:: Table of contents
:local:
:depth: 1
Known issues
------------
Updated : 04/29/2022
- In rare cases of multi-process applications running under heavy stress a model load failure my occur. This may require reloading of the Neuron Driver as a workaround.
Neuron Driver release [2.13.4.0]
--------------------------------
Date: 9/14/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added sysfs support for showing connected devices on trn1.32xl, inf2.24xl, and inf2.48xl instances.
Neuron Driver release [2.12.18.0]
--------------------------------
Date: 9/01/2023
Bug Fixes
^^^^^^^^^
* Added fixes required by Neuron K8 components for improving reliability of pod failures (see :ref:`Neuron K8 release notes <neuron-k8-rn>` for more details).
* Added fixes required by Neuron K8 components to support zero-based indexing of Neuron Devices in Kubernetes deployments.
Neuron Driver release [2.12.11.0]
--------------------------------
Date: 8/28/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added FLOP count to sysfs (flop_count)
* Added connected Neuron Device ids to sysfs (connected_devices)
* Added async DMA copy support
* Suppressed benign timeout/retry messages
Bug Fixes
^^^^^^^^^
* Allocated CC-Core to correct NeuronCore; splitting CC-Cores evenly between NeuronCores.
Neuron Driver release [2.11.9.0]
--------------------------------
Date: 7/19/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added support for creating batch DMA queues.
Bug Fixes
^^^^^^^^^
* Error message, "ncdev is not NULL", was being printed unnecessarily. Fixed.
* Fix DMA timeouts during NeuronCore reset of neighboring core caused by incorrect nc_id (NeuronCore ID) assigned to reserved memory
Neuron Driver release [2.10.11.0]
--------------------------------
Date: 6/14/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added memory usage breakdown by category to the Neuron Sysfs nodes. New categories are code, misc, tensors, constants, and scratchpad. Please see the Sysfs page under Neuron Tools for more detailed description of each.
* Improved NeuronCore initialization (nrt_init) performance by approximately 1 second.
Bug Fixes
^^^^^^^^^
* Fixed small timing window during NeuronCore resets, which previously would timeout during memcpy
* Removed potential double free of memory when terminating the Neuron Driver.
* Fixed sysfs race condition, which was leading to Neuron Driver crash during termination.
Neuron Driver release [2.9.4.0]
--------------------------------
Date: 05/01/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added dma_buf support, which is needed for future EFA implementations in the Linux kernel.
* Added new IOCTL to get Neuron Device BDF (used by Neuron Runtime)
* Added optional support for sysfs notify (off by default). See Neuron Sysfs documentation (under Neuron System Tools) for more details.
Bug Fixes
^^^^^^^^^
* Fixed max DMA queue size constant to be the correct size - previous incorrect sizing had potential to lead to DMA aborts (execution timeout).
Neuron Driver release [2.8.4.0]
--------------------------------
Date: 03/28/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Supports both Trn1n and Inf2 instance types.
* Renamed NEURON_ARCH_INFERENTIA=>NEURON_ARCH_V1 and NEURON_ARCH_TRN=>NEURON_ARCH_V2
* Under sysfs nodes, the following changes were made:
* Changed “infer” metrics to “execute” metrics
* Added peak memory usage metric
* Removed empty dynamic metrics directory
* Removed refresh rate metric
* Fixed arch type names in sysfs
Bug Fixes
^^^^^^^^^
* Fixed minor memory leak when closing the Neuron Runtime.
* Fixed memory leaks on error paths in Neuron Driver.
* Added a workaround to resolve hangs when NeuronCore reset is ran while another core is performing DMA operations.
Neuron Driver release [2.7.33.0]
--------------------------------
Date: 02/24/2023
Bug Fixes
^^^^^^^^^
* Added a retry mechanism to mitigate possible data copy failures during reset of a NeuronCore. An info log message will be emitted when this occurs indicating that the retry was attempted. An example::
kernel: [726415.485022] neuron:ndma_memcpy_wait_for_completion: DMA completion timeout for UDMA_ENG_33 q0
kernel: [726415.491744] neuron:ndma_memcpy_offset_move: Failed to copy memory during a NeuronCore reset: nd 0, src 0x100154480000, dst 0x100154500000, size 523264. Retrying the copy.
::
Neuron Driver release [2.7.15.0]
--------------------------------
Date: 02/08/2023
New in this release
^^^^^^^^^^^^^^^^^^^
* Added Neuron sysfs metrics under ``/sys/devices/virtual/neuron_device/neuron{0,1, ...}/metrics/``
Neuron Driver release [2.6.26.0]
--------------------------------
Date: 11/07/2022
New in this release
^^^^^^^^^^^^^^^^^^^
* Minor bug fixes and improvements.
Neuron Driver release [2.5.38.0]
--------------------------------
Neuron Driver now supports INF1 and TRN1 EC2 instance types. Name of the driver package changed from aws-neuron-dkms to aws-neuronx-dkms. Please remove the older driver package before installing the newest one.
Date: 10/10/2022
New in this release
^^^^^^^^^^^^^^^^^^^
* Support added for EC2 Trn1 instance types and ML training workloads.
* Added missing GPL2 LICENSE file.
* Changed package name to aws-neuronx-dkms (was previously minus the 'x').
* Security Update -- blocked user space access to control registers and DMA control queues intended to be used by the Neuron Driver only.
* Added support for DMA Aborts to avoid hangs.
* Added support for TPB Reset.
* Added sysfs entries for triggering resets and reading core counts.
* Added write combining on BAR4.
* Added PCI Device ID update as part of install.
* Added handling for known duplicate device id error.
Bug Fixes
^^^^^^^^^
* Fixed a null pointer free scenario.
* Fixed installation issue related to install without internet connectivity.
Neuron Driver release [2.3.26.0]
--------------------------------
Date: 08/02/2022
Bug Fixes
^^^^^^^^^
- Security Update: Blocked user space access to control registers and DMA control queues intended to be used by the Neuron Driver only. Recommending upgrade to all customers.
Neuron Driver release [2.3.11.0]
--------------------------------
Date: 05/27/2022
New in this release
^^^^^^^^^^^^^^^^^^^
- This driver is required to support future releases of the Neuron Runtime. Included in the release is both a bug fix to avoid a kernel crash scenario and an increased compatibility range to ensure compatibility with future versions of Neuron Runtime.
Bug Fixes
^^^^^^^^^
- Correction to huge aligned memory allocation/freeing logic that was previously susceptible to crashes in the kernel. The crash would bring down the OS. Recommending upgrade to all customers.
Neuron Driver release [2.3.3.0]
--------------------------------
Date: 04/29/2022
New in this release
^^^^^^^^^^^^^^^^^^^
- Minor performance improvements on inference and loading of models.
Bug Fixes
^^^^^^^^^
- Reduced Host CPU usage when reading ``hw_counters`` metric from neuron-monitor
- Minor bug fixes.
Neuron Driver release [2.2.14.0]
--------------------------------
Date: 03/25/2022
New in this release
^^^^^^^^^^^^^^^^^^^
- Minor updates
Neuron Driver release [2.2.13.0]
--------------------------------
Date: 01/20/2022
New in this release
^^^^^^^^^^^^^^^^^^^
- Minor updates
Neuron Driver release [2.2.6.0]
-------------------------------
Date: 10/27/2021
New in this release
^^^^^^^^^^^^^^^^^^^
- Memory improvements made to ensure all allocations are made with 4K
alignments.
Resolved issues
^^^^^^^^^^^^^^^
- No longer delays 1s per NeuronDevice when closing Neuron Tools
applications.
- Fixes a Ubuntu 20 build issue
Neuron Driver release [2.1]
---------------------------
- Support is added for Neuron Runtime 2.x (``libnrt.so``).
- Support for previous releases of Neuron Runtime 1.x is continued with
Driver 2.x releases.
</pre></body></html>
|
2023-09-29T20:54:58.090Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/runtime/aws-neuronx-collectives/index.rst.txt
|
```
.. _neuron-collectives-rn:
Neuron Collectives Release Notes
================================
Neuron Collectives refers to a set of libraries used to support collective compute operations within the Neuron SDK. The collectives support is delivered via the aws-neuronx-collectives package and includes a pre-built version of the OFI plugin required for use of collectives with Elastic Fabric Adapter (EFA).
.. contents:: Table of contents
:local:
:depth: 1
Neuron Collectives [2.17.9.0]
------------------------------
Date: 9/14/2023
New in this release:
* minor bug fixes and enhancements
Neuron Collectives [2.16.16.0]
------------------------------
Date: 9/01/2023
New in this release:
* minor bug fixes and enhancements
Neuron Collectives [2.16.8.0]
------------------------------
Date: 8/28/2023
New in this release:
* Improved error messages for unsupported topologies
* Improved timeout error messages for bootstrapInit
Bug Fixes:
* Fixed a bug where the Linux kernel version check for the SAFE_FORK environment variable incorrectly required SAFE_FORK to be set on kernel versions greater than 5.
Neuron Collectives [2.15.16.0]
------------------------------
Date: 8/09/2023
New in this release:
* minor bug fixes and enhancements
Neuron Collectives [2.15.13.0]
------------------------------
Date: 7/19/2023
New in this release:
* AllReduce with All-to-all communication pattern enabled for 16 ranks on TRN1/TRN1N within the instance (intranode); choice of 16 ranks is limited to NeuronCores 0-15 or 16-31.
Bug Fixes:
* Fix incorrect mask calculation for 16 ranks when using NeuronCores 16-31
* Fix channels for 16 ranks to avoid failures in the runtime; restrict participating ranks to 0-15 or 16-31
Neuron Collectives [2.14.9.0]
------------------------------
Date: 6/14/2023
New in this release
* Added check for FI_EFA_FORK_SAFE environment variable; now forcing the flag to be set to 1 for multinode runs executing on Linux kernels older than 5.15.
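For reference, the variable can also be set explicitly before launching a job (a minimal sketch; the launcher mentioned in the comment is only a placeholder and not part of this release note):
.. code-block:: shell
   # Relevant for multinode runs on Linux kernels older than 5.15
   export FI_EFA_FORK_SAFE=1
   # then start the multinode run with your usual launcher, e.g. torchrun or mpirun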
Neuron Collectives [2.13.7.0]
------------------------------
Date: 05/01/2023
New in this release
* Added support for dma_buf - required for future EFA and Linux kernel updates.
* Reduced benign reporting of timeouts. Previous implementations reported “Timeout waiting for incoming connection” too frequently (log spam).
Neuron Collectives [2.12.35.0]
------------------------------
Date: 04/19/2023
Bug Fixes
* Fixed support for SOCKET_IFNAME config that was affecting EKS users at scale on large training jobs.
Neuron Collectives [2.12.22.0]
------------------------------
Date: 03/28/2023
New in this release
* Added support for TRN1N.
* Added support for 16 channels and 16 EFA devices, which is required for enabling EC2 TRN1N instances with Neuron.
* Added support for hierarchical All-Reduce and Reduce-Scatter. These implementations are now used by default and provide up to a 75% reduction in latency for 2MB buffers across 256 ranks.
Neuron Collectives [2.11.47.0]
------------------------------
Date: 02/08/2023
New in this release
* Added support for Inf2.
Neuron Collectives [2.10.20.0]
-----------------------------
Date: 10/10/2022
New in this release
* Improved logging to appear similar in style to Neuron Runtime
Bug Fixes
* Fixed memory registration to support 2GB+ sizes
* Fixed association of network devices to channels (removes previous hard-coding).
Neuron Collectives [2.9.86.0]
-----------------------------
Date: 10/10/2022
New in this release
* Added support for All-Reduce, Reduce-Scatter, All-Gather, and Send/Recv operations.
```
|
|
2023-09-29T20:54:58.187Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/compiler/neuronx-cc/index.rst.txt
|
```
.. _neuronx-cc-rn:
Neuron Compiler (``neuronx-cc``) release notes
==============================================
.. contents:: Table of Contents
:local:
:depth: 2
Neuron Compiler [2.10.0.35]
-----------------------------
Date: 09/26/2023
* This release addresses a compilation regression for certain configurations of Llama and Llama-2 inference models that previously failed compilation with the error "IndirectLoad/Save requires contiguous indirect access per partition".
There is still a known issue for some configurations of the model with the error "Too many instructions after unroll for function sg0000". To mitigate this, please try the ``-O1`` compiler option (or ``--optlevel 1``). A complete fix that does not require this option will be included in a future release.
Neuron Compiler [2.10.0.34]
-----------------------------
Date: 09/15/2023
* This release introduces a new ``--optlevel (-O)`` compiler option. This option allows the user to balance between compile-time and optimizations performed.
Three levels are supported. Level ``--optlevel 1 (-O1)`` aims to minimize compile-time and allow for a more rapid model development cycle. Model execution
performance may be reduced. Level ``--optlevel 3 (-O3)`` performs whole-model optimization. This level will deliver the best performance; however, there will be longer
compile-times and the compiler will use more host DRAM, potentially requiring a larger instance to compile the model.
The default is ``--optlevel 2 (-O2)`` which provides a balance between model performance and compile time.
The previous ``--enable-experimental-O1`` flag introduced in the 02/08/2023 Neuron Compiler [2.4.0.21] release is now deprecated. Using this flag
will generate a message similar to:
WARNING: Option --enable-experimental-O1 is deprecated and will be removed in a future release. Use ``--optlevel 1 (-O1)`` instead.
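As an illustration only (not part of the original release note; the model and output file names are placeholders), the option can be passed directly on the compiler command line:
.. code-block:: shell
   # Faster compilation during model development
   neuronx-cc compile model.hlo --framework XLA --target trn1 --optlevel 1 --output model.neff
   # Whole-model optimization for best execution performance (longer compile time)
   neuronx-cc compile model.hlo --framework XLA --target trn1 -O3 --output model.neff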
Neuron Compiler [2.9.0.16]
-----------------------------
Date: 08/28/2023
* This release fixes an issue where any initial seed passed into the Random Number Generator operator was not honored. The RngBitGenerator operator now correctly accepts and uses setting the seed. Note that the current RNG implementation only supports 32-bit seeds.
Neuron Compiler [2.8.0.25]
-----------------------------
Date: 07/19/2023
* This release introduces a new optional ``--distribution_strategy`` compiler option. This option informs the compiler what type of distributed APIs are used to shard the model and allows the compiler to make API-specific optimizations. Currently the following option-arguments are supported: ``nemo``.
Neuron Compiler [2.7.0.40]
-----------------------------
Date: 06/14/2023
* This release introduces a new ``--enable-saturate-infinity`` compiler option. A computation that can generate +/- infinity is at a high
risk of generating Not-a-Number (NaN) values when the infinity value is used in subsequent computations. This option helps avoid this
by converting +Inf/-Inf values to MAX/MIN_FLOAT before operations that could produce NaN values for +Inf/-Inf inputs on the target
architecture. While this option helps to avoid NaN values, there is a potential performance degradation that occurs during model
execution when this conversion is enabled.
Neuron Compiler [2.6.0.19]
-----------------------------
Date: 05/01/2023
* This release introduces a new ``model-type`` option argument: ``unet-inference``.
This option instructs the compiler to perform model-specific optimizations that produce executable models with improved performance
on the specified target instance.
* Added support for the HLO operator ``BitcastConvertType`` and also added support for ``TopK`` (sampling mode) operator.
Neuron Compiler [2.5.0.28]
-----------------------------
Date: 03/28/2023
* This release introduces the ``trn1n`` option argument to the compiler ``target`` option to specify that it should
generate code for a trn1n instance type. Example usage: ``neuronx-cc compile --target=trn1n ...``
* The compiler's usage message now includes the ``inf2`` option argument.
* A new 8-bit floating point data type, ``fp8_e4m3``, is now supported and can be specified using the ``auto-cast-type`` option.
This instructs the compiler to convert the FP32 operations selected via the ``--auto-cast`` option to a signed FP8 type
with a 4-bit exponent and 3-bit mantissa. Care must be taken to ensure that the down-casted values are representable within the 8-bit data range.
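A hypothetical invocation (added for illustration; the file names are placeholders) would look like:
.. code-block:: shell
   # Cast FP32 operations selected by --auto-cast down to fp8_e4m3; verify accuracy afterwards
   neuronx-cc compile model.hlo --framework XLA --target trn1 --auto-cast all --auto-cast-type fp8_e4m3 --output model.neff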
Neuron Compiler [2.4.0.21]
-----------------------------
Date: 02/24/2023
* This release introduces the ``inf2`` option argument to the compiler ``target`` option to specify that it should
generate code for an inf2 instance type. Example usage: ``neuronx-cc compile --target=inf2 ...``
The ``inf2`` option argument does not appear in the compiler's usage message. It will be added in the next release.
Neuron Compiler [2.4.0.21]
-----------------------------
Date: 02/08/2023
* Added support for the following HLO operators: ``SelectAndScatter``.
* EXPERIMENTAL: ``--enable-experimental-O1`` flag: This option reduces the compile-time with a negligible impact on model execution performance.
It allows the compiler to execute compiler passes in parallel to perform the compilation. By default the compiler uses 8 processes.
This can be changed via the CLI option ``--num-parallel-jobs``. This option is expected to become the default in a future SDK release.
Neuron Compiler [2.3.0.4]
-----------------------------
Date: 12/09/2022
* Added support for the following HLO operators: ``rev (reverse)``.
* The ``pow()`` function can now handle both integer and floating-point exponents.
* Optimization enhancements and bug fixes to improve model execution performance.
Neuron Compiler [2.2.0.73]
-----------------------------
Date: 10/27/2022
* Adding support for the following HLO operators: ``LogicalNot``, ``atan2`` and ``DynamicUpdateSlice`` (for constant index).
Neuron Compiler [2.1.0.76]
-----------------------------
Date: 10/5/2022
The Neuron Compiler is an Ahead-of-Time compiler that accelerates models for
execution on NeuronCores. This release supports compiling models for training
on a Trn1 instance using Pytorch Neuron. Users typically access the compiler via
the Framework to perform model compilation, although it can also be run
as a command line tool (*neuronx-cc*).
The Neuron Compiler supports compiling models for mixed precision calculations.
The trn1 hardware supports matrix multiplication using FP16, BF16, and FP32 on
its Matrix Multiplication Engine, and accumulations using FP32. Operators such as
activations or vector operations are supported using FP16, BF16, and FP32.
Tensor transpose can be accomplished in FP16, BF16, FP32, or TF32 datatypes.
By default, scalar and vector operations on FP32 values will be done in FP32,
while matrix multiplications are cast to BF16 and transpose operations are cast to FP32.
This default casting will generate the highest performance for a FP32 trained model.
By default, the compiler will target maximum performance by automatically casting
the model to mixed precision. It also provides an option (``--auto-cast``) that
allows the user to make tradeoffs between higher performance and optimal accuracy.
The decision on what option argument to use with the ``--auto-cast`` option will be
application specific. Compiler CLI options can be passed to the compiler via the framework.
Known issues
^^^^^^^^^^^^
- The Random Number Generator operation can be passed an initial seed
value, however setting the seed is not supported in this release.
- The exponent value of the pow() function must be a compile-time
integer constant.
- The compiler treats INT64 datatypes as INT32 by truncating the
high-order bits. If possible, cast these values to 32 bits.
- Model compilation time is proportional to the model size and
operators used. For some larger NLP models it may be upwards of 30
minutes.
Supported Operators
-------------------
The following XLA operators are supported by the Neuron Compiler.
Future releases will broaden model support by providing additional XLA operators defined in
https://www.tensorflow.org/xla/operation_semantics.
The list of supported operators can also be retrieved from the command line using :ref:`neuronx-cc list-operators<neuronx-cc-list-operators>`.
+-------------------------+-------------------------------------------+
| Supported XLA Operators | Notes |
+=========================+===========================================+
| Abs | |
+-------------------------+-------------------------------------------+
| Add | |
+-------------------------+-------------------------------------------+
| Allgather | |
+-------------------------+-------------------------------------------+
| Allreduce | |
+-------------------------+-------------------------------------------+
| Atan2 | |
+-------------------------+-------------------------------------------+
| Batchnorm | |
+-------------------------+-------------------------------------------+
| Batchnormgrad | |
+-------------------------+-------------------------------------------+
| Batchnorminference | |
+-------------------------+-------------------------------------------+
| BitcastConvertType | |
+-------------------------+-------------------------------------------+
| Broadcast | |
+-------------------------+-------------------------------------------+
| BroadcastInDim | |
+-------------------------+-------------------------------------------+
| Ceil | |
+-------------------------+-------------------------------------------+
| Clamp | |
+-------------------------+-------------------------------------------+
| Compare | |
+-------------------------+-------------------------------------------+
| Concatenate | |
+-------------------------+-------------------------------------------+
| Constant | |
+-------------------------+-------------------------------------------+
| ConstantLiteral | |
+-------------------------+-------------------------------------------+
| ConvertElementType | |
+-------------------------+-------------------------------------------+
| Cos | |
+-------------------------+-------------------------------------------+
| Customcall | |
+-------------------------+-------------------------------------------+
| Div | |
+-------------------------+-------------------------------------------+
| Dot | |
+-------------------------+-------------------------------------------+
| DotGeneral | |
+-------------------------+-------------------------------------------+
| DynamicUpdateSlice | Supports only for constant index |
+-------------------------+-------------------------------------------+
| Eq | |
+-------------------------+-------------------------------------------+
| Exp | |
+-------------------------+-------------------------------------------+
| Floor | |
+-------------------------+-------------------------------------------+
| Gather | Supports only disjoint start_index_map |
| | and remapped_offset_dims |
+-------------------------+-------------------------------------------+
| Ge | |
+-------------------------+-------------------------------------------+
| GetTupleElement | |
+-------------------------+-------------------------------------------+
| Gt | |
+-------------------------+-------------------------------------------+
| Iota | |
+-------------------------+-------------------------------------------+
| Le | |
+-------------------------+-------------------------------------------+
| Log | |
+-------------------------+-------------------------------------------+
| LogicalAnd | |
+-------------------------+-------------------------------------------+
| LogicalNot | |
+-------------------------+-------------------------------------------+
| Lt | |
+-------------------------+-------------------------------------------+
| Max | |
+-------------------------+-------------------------------------------+
| Min | |
+-------------------------+-------------------------------------------+
| Mul | |
+-------------------------+-------------------------------------------+
| Ne | |
+-------------------------+-------------------------------------------+
| Neg | |
+-------------------------+-------------------------------------------+
| Pad | |
+-------------------------+-------------------------------------------+
| Pow | Exponent argument must be a compile-time |
| | integer constant |
+-------------------------+-------------------------------------------+
| Reduce | Min, Max, Add and Mul are the only |
| | supported computations. Init_values must |
| | be constant |
+-------------------------+-------------------------------------------+
| Reshape | |
+-------------------------+-------------------------------------------+
| Rev (reverse) | |
+-------------------------+-------------------------------------------+
| RngBitGenerator | Ignores user seed |
+-------------------------+-------------------------------------------+
| RngUniform | |
+-------------------------+-------------------------------------------+
| Rsqrt | |
+-------------------------+-------------------------------------------+
| Scatter | |
+-------------------------+-------------------------------------------+
| Select | |
+-------------------------+-------------------------------------------+
| SelectAndScatter | |
+-------------------------+-------------------------------------------+
| ShiftRightLogical | |
+-------------------------+-------------------------------------------+
| Sign | |
+-------------------------+-------------------------------------------+
| Sin | |
+-------------------------+-------------------------------------------+
| Slice | |
+-------------------------+-------------------------------------------+
| Sqrt | |
+-------------------------+-------------------------------------------+
| Sub | |
+-------------------------+-------------------------------------------+
| Tanh | |
+-------------------------+-------------------------------------------+
| Transpose | |
+-------------------------+-------------------------------------------+
| Tuple | |
+-------------------------+-------------------------------------------+
```
|
|
2023-09-29T20:54:58.204Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.rst.txt
|
```
.. _neuron-compiler-cli-reference-guide:
Neuron Compiler CLI Reference Guide (``neuronx-cc``)
====================================================
This document describes the command line interface of the Neuron Compiler.
This reference is not relevant for applications that run the Neuron Compiler from within a machine learning framework (:ref:`PyTorch-Neuron <pytorch-neuronx-programming-guide>` for example) since these options are passed from the framework directly to the compiler. Using the compiler command line may be desirable for applications that do not use a framework or customize existing frameworks. It is also possible to specify compiler options within the framework which will forward these options to the compiler using :ref:`NEURON_CC_FLAGS <pytorch-neuronx-envvars>`.
Usage
-----
*Optional parameters are shown in square brackets.*
.. _neuron_cli:
.. rubric:: Neuron Compiler Command-Line Interface
.. program:: neuronx-cc
.. option:: neuronx-cc <command> [parameters]
Common parameters for the Neuron CLI:
- :option:`--verbose <level>`: Specify the level of output produced by the compiler. (Default: ``warning``)
Valid values:
- ``info``: Informational messages regarding the progress of model compilation (written to stdout).
- ``warning``: Diagnostic messages that report model code that is not inherently erroneous but may be risky or suggest there may have been an error (written to stderr).
- ``error``: The compiler detected a condition causing it to not complete the compilation successfully (written to stderr).
- ``critical``: The compiler encountered an unrecoverable error and terminates immediately (written to stderr).
- ``debug``: Extensive information regarding the compiler's internal execution phases (written to stdout).
- :option:`--help`: Display a usage message of compiler options.
Use :option:`neuronx-cc <command> --help` for information on a specific command.
Available Commands:
~~~~~~~~~~~~~~~~~~~~~~~
- :option:`compile`
- :option:`list-operators`
.. _neuronx-cc-compile:
.. option:: neuronx-cc compile [parameters]
.. _description-1:
Compile a model for use on the AWS Machine Learning Accelerator.
.. code-block:: shell
neuronx-cc compile <model_files>
--framework <framework_name>
--target <instance_family>
[--model-type <model>]
[--auto-cast <cast_mode>]
[--auto-cast-type <data_type>]
[--distribution-strategy <distribution_type>]
[--optlevel <opt-level>], or [-O <opt-level>]
[--enable-saturate-infinity]
[--enable-fast-context-switch]
[--enable-fast-loading-neuron-binaries]
[--logfile <filename>]
[--output <filename>]
*Compile Parameters:*
- :option:`<model_files>`: Input containing model specification.
The number of arguments required varies between frameworks:
- **XLA**: A local filename of an HLO file (hlo.pb) generated via XLA. See `hlo.proto <https://github.com/tensorflow/tensorflow/blob/73c8e20101ae93e9f5ff0b58f68be0b70eca44c5/tensorflow/compiler/xla/service/hlo.proto>`_ for the .proto description and `inspect-compiled-programs <https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/index.md#user-content-inspect-compiled-programs>`_ for more information on how to generate such files.
- :option:`--framework <framework_name>`: Framework used to generate training model.
Valid values:
- ``XLA``
- :option:`--target <instance_family>`: Name of the Neuron instance family on which the compiled model will be run.
Valid values:
- ``inf2``
- ``trn1``
- ``trn1n``
- :option:`--model-type <model>`: Permit the compiler to attempt model-specific optimizations based upon type of model being compiled. (Default: ``generic``)
Valid values:
- ``generic``: Perform optimizations applicable to all types of inference and training models.
- ``transformer``: Perform optimizations specific to `Transformer <https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)>`_ models.
- ``unet-inference``: Perform optimizations specific to certain `U-Net <https://en.wikipedia.org/wiki/U-Net>`_ model architectures when performing inference. U-Net models often have certain structures that result in excessive performance-impacting data transfers; this option allows the compiler to apply additional memory optimizations to prevent these data transfers and also allows the compiler to map larger normalization operators which would otherwise not successfully execute.
- :option:`--auto-cast <cast_mode>`: Controls how the compiler makes tradeoffs between performance and accuracy for FP32 operations. (Default: ``matmult``)
Valid values:
- ``matmult``: Only cast FP32 operations that use the Neuron matrix-multiplication engine.
- ``all``: Cast all FP32 operations to achieve highest performance. This option can potentially lower precision/accuracy.
- ``none``: Leave all data types as defined in the model. Do not apply auto-casting data type optimizations.
A more complete discussion on how to use this option and its arguments is in :ref:`Mixed Precision and Performance-accuracy Tuning for Training <neuronx-cc-training-mixed-precision>`.
.. note:: If the :option:`--auto-cast` option is specified, the :option:`--auto-cast-type` compiler flag can be optionally set to define which lower-precision data type the compiler should use.
- :option:`--auto-cast-type <data_type>`: When auto-cast mode is enabled, cast the FP32 operators to the lower-precision data type specified by this option. (Default: ``bf16``)
Valid values:
- ``bf16``: Cast the FP32 operations selected via the :option:`--auto-cast` option to BF16 to achieve highest performance and preserve dynamic range.
- ``fp16``: Cast the FP32 operations selected via the :option:`--auto-cast` option to FP16 to achieve improved performance relative to FP32 and increased precision relative to BF16.
- ``tf32``: Cast the FP32 operations selected via the :option:`--auto-cast` option to TensorFloat-32.
- ``fp8_e4m3``: Cast the FP32 operations selected via the :option:`--auto-cast` option to a signed 8-bit floating point represented as a 4-bit exponent and 3-bit mantissa.
.. note:: If multiple competing options are specified then the option right-most on the command line will supersede previous options.
- :option:`--distribution-strategy <distribution_type>`: Permit the compiler to attempt optimizations based upon the type of distributed APIs used to shard the model. (Default: ``generic``)
Valid values:
- ``NEMO``: Enable the compiler to perform optimizations applicable to models that use the `NeMo <https://github.com/NVIDIA/NeMo>`_ APIs to shard parameters, gradients, and optimizer states across data-parallel workers.
- :option:`--optlevel <opt_level>`: Specify the level of optimization the compiler should perform. Possible numeric values are {1, 2, 3}. (Default: ``2``)
Valid values:
- ``1``: enables the core performance optimizations in the compiler, while also minimizing compile time.
- ``2``: [default] provides the best balance between model performance and compile time.
- ``3``: may provide additional model execution performance but may incur longer compile times and higher host memory usage during model compilation.
.. note:: This option supersedes, and deprecates, the ``--enable-experimental-O1`` option introduced in an earlier release.
- :option:`--enable-saturate-infinity`: Convert +/- infinity values to MAX/MIN_FLOAT for certain computations that have a high risk of generating Not-a-Number (NaN) values. There is a potential performance impact during model execution when this conversion is enabled.
- :option:`--enable-fast-context-switch`: Optimize for faster model switching rather than execution latency.
This option will defer loading some weight constants until the start of model execution. This results in overall faster system performance when your application switches between models frequently on the same Neuron Core (or set of cores).
- :option:`--enable-fast-loading-neuron-binaries`: Save the compilation output file in an uncompressed format.
This creates executable files which are larger in size but faster for the Neuron Runtime to load into memory during model execution.
- :option:`--logfile <filename>`: Filename where compiler writes log messages. (Default: "log-neuron-cc.txt")
- :option:`--output <filename>`: Filename where compilation output (NEFF archive) will be recorded. (Default: "file.neff")
*Example*:
Compiling an XLA HLO:
.. code-block:: shell
neuronx-cc compile bert-model.hlo --framework XLA --target trn1 --model-type transformer --output bert.neff
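A second, hedged example (added for illustration; the file names are placeholders) that also selects the auto-cast behavior:
.. code-block:: shell
   neuronx-cc compile model.hlo --framework XLA --target trn1 --auto-cast all --auto-cast-type fp16 --output model.neff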
.. _neuronx-cc-list-operators:
.. option:: neuronx-cc list-operators [parameters]
.. _description-1:
Returns a newline (‘\\n’) separated list of operators supported by the Neuron Compiler.
.. code-block:: shell
neuronx-cc list-operators
--framework <value>
*List-Operators Parameters:*
- :option:`--framework <framework_name>`: Framework in which the operators were registered.
Valid values:
- ``XLA``: Operator names will be formatted according to the value used by XLA compiler in XlaBuilder.
*Example*:
.. code-block:: shell
neuronx-cc list-operators --framework XLA
...
*Exit Statuses*:
- **0**: Compilation succeeded
- **<>0**: An error occurred during compilation.
```
|
|
2023-09-29T20:54:58.214Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.rst.txt
|
```
.. _neuronx-cc-training-mixed-precision:
Mixed Precision and Performance-accuracy Tuning (``neuronx-cc``)
================================================================
.. contents:: Table of contents
:local:
:depth: 2
Overview
--------
The Neuron Compiler supports machine learning models with FP32, TF32, FP16 and BF16 (Bfloat16) tensors and operators. The Neuron hardware supports a mix of 32, 16, and 8 bit datatypes. This guide explains how to apply the available auto-cast methods and their performance / accuracy trade-offs when compiling a model with Neuron.
.. note:: Neuron Compiler support for INT8 is planned for a future Neuron SDK release. See `Neuron Compiler: Enable Neuron INT8 support <https://github.com/aws/aws-neuron-sdk/issues/36>`_ for details.
Neuron Hardware
---------------
The Neuron v2 hardware supports matrix multiplication using FP16, BF16, TF32, and FP32 on its matrix multiply ("matmult") engine, and accumulations using FP32. Operators such as activations or vector operations are supported using FP32, TF32, FP16, and BF16. Supporting FP16 and BF16 allows Neuron to have significantly higher performance than executing everything as FP32.
Performance-accuracy tradeoffs
------------------------------
**By default**, the Neuron Compiler will **automatically cast FP32 matrix multiplication operations to BF16**. The remaining operations are performed in the data type specified by the model. The Neuron Compiler provides CLI options that direct the compiler to cast to other data types, thereby giving the ability to choose an accuracy-to-performance tradeoff in model execution. Deciding what CLI settings to use will be application specific and may require some experimentation. See :ref:`Neuron Compiler CLI Reference Guide<neuron-compiler-cli-reference-guide>` for details.
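When compiling through a framework, these CLI options can be forwarded to the compiler via the ``NEURON_CC_FLAGS`` environment variable (a minimal sketch, assuming a PyTorch Neuron training script; the script name is a placeholder):
.. code-block:: shell
   # Keep all operations in the data types defined by the model (highest accuracy)
   export NEURON_CC_FLAGS="--auto-cast none"
   python train.py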
What is the difference between Data Types?
-------------------------------------------
The NeuronCore v2 supports multiple data types (see :ref:`NeuronCore v2 Data Types<neuron-data-types-v2>`). Each data type provides benefits and drawbacks due to its dynamic range and numeric precision.
+------+-----------+----------+--------------------------------------------------------+---------------------------------------------------+
| Type | Minimum | Maximum | Strength | Weakness |
+======+===========+==========+========================================================+===================================================+
| FP16 | -65504 | 65504 | Numeric Precision, High granularity, Mid-range numbers | Low range, medium precision |
+------+-----------+----------+--------------------------------------------------------+---------------------------------------------------+
| BF16 | -3.40E+38 | 3.40E+38 | Dynamic Range, Extremely small/large numbers | Low precision |
+------+-----------+----------+--------------------------------------------------------+---------------------------------------------------+
| TF32 | -3.40E+38 | 3.40E+38 | Dynamic Range, Extremely small/large numbers | Medium precision |
+------+-----------+----------+--------------------------------------------------------+---------------------------------------------------+
| FP32 | -3.40E+38 | 3.40E+38 | N/A | Larger model size, potentially slower computation |
+------+-----------+----------+--------------------------------------------------------+---------------------------------------------------+
* FP16 provides a high density of representable values that are neither extremely small nor extremely large. The density of representable values within this range is approximately an order of magnitude greater than that of BF16.
* Conversion from FP32 to FP16 will perform well when values are moderate in magnitude, that is, non-extreme (neither very small nor very large).
* Conversion from FP32 to FP16 will perform badly if the original FP32 values are outside of the range of FP16. This will produce inf/-inf values and may result in NaN depending on the operation.
* BF16 provides a wider range of representable values which includes both very small and very large values. However, the overall density of representable values is lower than that of FP16 for non-extreme values. The range is nearly identical to the range of FP32, but because the number of bits is halved, the individual representable values are sparse.
* Conversion from FP32 to BF16 will perform well when the values are well-distributed throughout the range. Since BF16 covers the entire FP32 range, each original value can map to a relatively close downcast value.
* Conversion from FP32 to BF16 will perform badly when fine granularity is needed. Since BF16 sacrifices granularity for greater range, it will almost always represent values that are within the FP16 range less accurately than FP16 would.
Should I downcast operations to smaller Data Types?
---------------------------------------------------
This choice is driven entirely by the accuracy-versus-performance tradeoff. Casting operations to smaller 16-bit data types provides a significant performance benefit but may sacrifice accuracy.
The compiler uses BF16 casting **by default** for matrix multiplication operations. The speedup from casting operations gives a significant performance boost and the range of representable values in BF16 allows for more safety compared to FP16 when the possible numeric range of input values is unknown.
The Neuron Compiler’s :option:`--auto-cast` and :option:`--auto-cast-type` CLI options are used to direct the compiler to perform alternate casting operations. See the detailed list of the options in :ref:`Neuron v2 Compiler CLI Reference Guide<neuron-compiler-cli-reference-guide>`.
It is recommended that you start by compiling the model for high performance (the default). You can then test the accuracy of the application and, if needed, try the next higher precision casting option until the desired accuracy and performance are achieved.
The option combinations to consider in a typical flow are:
+---------------------------------------------------------+--------------------------------------------------------------------------+-----------------------------------------------------+-------------------------------------------------+
| Compiler autocast | Options Effect | Performance | Accuracy |
+=========================================================+==========================================================================+=====================================================+=================================================+
| ``--auto-cast all --auto-cast-type bf16`` | Best performance at the expense of precision | Performance *decreases* as you move down the table | Accuracy *increases* as you move down the table |
+---------------------------------------------------------+ + | |
| ``--auto-cast matmult --auto-cast-type bf16`` (default) | | | |
+---------------------------------------------------------+--------------------------------------------------------------------------+ | |
| ``--auto-cast all --auto-cast-type fp16``                | Best performance at the expense of dynamic range                        |                                                     |                                                 |
+---------------------------------------------------------+--------------------------------------------------------------------------+ | |
| ``--auto-cast matmult --auto-cast-type fp16`` | | | |
+---------------------------------------------------------+--------------------------------------------------------------------------+ | |
| ``--auto-cast all --auto-cast-type tf32``                | Balance of performance, dynamic range, and precision                    |                                                     |                                                 |
+---------------------------------------------------------+--------------------------------------------------------------------------+ | |
| ``--auto-cast matmult --auto-cast-type tf32`` | | | |
+---------------------------------------------------------+--------------------------------------------------------------------------+ | |
| ``--auto-cast none`` | Disables all auto-casting, using the data types defined within the model | | |
+---------------------------------------------------------+--------------------------------------------------------------------------+-----------------------------------------------------+-------------------------------------------------+
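For illustration, the rows above map onto command lines such as the following. This is only a hedged sketch of a direct ``neuronx-cc`` invocation: the model file name and the ``--target`` value are placeholders, and in most workflows these options are forwarded by the framework rather than typed by hand.
.. code-block:: shell
   # Default behavior: FP32 matrix-multiplication operations are cast to BF16
   neuronx-cc compile model.hlo --framework XLA --target trn1 --output model.neff
   # Most aggressive casting (top row of the table): best performance, lowest precision
   neuronx-cc compile model.hlo --framework XLA --target trn1 --auto-cast all --auto-cast-type bf16 --output model.neff
   # No auto-casting (bottom row of the table): best accuracy
   neuronx-cc compile model.hlo --framework XLA --target trn1 --auto-cast none --output model.neff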
Note that the compiler has to preserve the input/output (i/o) tensor types requested by the Framework; therefore, no casting is done on the i/o tensors. Additional speedup can be obtained by casting them in the Framework prior to compilation.
To learn how to configure the compiler options from within your application’s framework, please see:
* :ref:`Developer Guide for Training with PyTorch Neuron <pytorch-neuronx-programming-guide>`
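For instance, when training with PyTorch Neuron the options are commonly forwarded to the compiler through the ``NEURON_CC_FLAGS`` environment variable (see the guide above for the exact mechanism). The following is only a sketch with an assumed option combination and a placeholder training script name.
.. code-block:: shell
   # Sketch: cast all FP32 operations to FP16 during compilation of the training graph
   export NEURON_CC_FLAGS="--auto-cast all --auto-cast-type fp16"
   python train.py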
```
|
|
2023-09-29T20:54:58.234Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/compiler/neuronx-cc/developer-guide.rst.txt
|
```
Developer Guide
===================
.. toctree::
:maxdepth: 1
/general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision
```
|
|
2023-09-29T20:54:58.241Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/compiler/neuron-cc/command-line-reference.rst.txt
|
```
.. _neuron-compiler-cli-reference:
Neuron compiler CLI Reference Guide (``neuron-cc``)
===================================================
This document describes the command line interface of the Neuron
compiler. This reference is not relevant for applications that run
neuron-cc from within a machine learning framework (TensorFlow-Neuron
for example) since these options are passed from the framework directly
to neuron-cc.
Using neuron-cc on the command line may be desirable for applications
that do not use a framework, or customize existing frameworks. It is
also possible to supply CLI commands to the framework as options to be
passed through to the compiler.
Usage
--------
Optional parameters are shown in square brackets. See the individual
framework guides for the correct syntax.
.. _neuron_cli:
.. rubric:: Neuron Compiler CLI
.. program:: neuron-cc
.. option:: neuron-cc [options] <command> [parameters]
Common options for the Neuron CLI:
- :option:`--verbose` (string) default=“WARN”:
Valid values:
- :option:`DEBUG`
- :option:`INFO`
- :option:`WARN`
- :option:`ERROR`
Use :option:`neuron-cc <command> --help` for information on a specific command.
Available Commands:
~~~~~~~~~~~~~~~~~~~
- :option:`compile`
- :option:`list-operators`
.. option:: neuron-cc compile [parameters]
Compile a model for use on the AWS Inferentia Machine Learning Accelerator.
.. code-block::
neuron-cc compile <file names> --framework <value> --io-config <value> [--neuroncore-pipeline-cores <value>] [--enable-saturate-infinity] [--enable-fast-loading-neuron-binaries] [--enable-fast-context-switch] [--fp32-cast cast-method] [--fast-math cast-method] [--output <value>]
**Compile Parameters:**
- :option:`<file names>`: Input containing model specification. The number
of arguments required varies between frameworks:
- **TENSORFLOW**: A local filename or URI of a TensorFlow Frozen
GraphDef (.pb); or the name of a local directory containing a
TensorFlow SavedModel.
See
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/framework/graph.proto
for the associated .proto schema for TensorFlow Frozen GraphDefs. See
https://www.tensorflow.org/guide/saved_model for more information on
the SavedModel format.
- **MXNET**: List of local filenames or URIs where input architecture
.json file and parameter .param file are stored. These contain
information related to the architecture of your graph and associated
parameters, respectively.
- :option:`--framework` (string): Framework in which the model was trained.
Valid values:
- :option:`TENSORFLOW`
- :option:`MXNET`
- :option:`XLA`
- :option:`--neuroncore-pipeline-cores` (int) (default=1): Number of neuron cores
to be used in "NeuronCore Pipeline" mode. This is different from data
parallel deployment (same model on multiple neuron cores). Refer to
Runtime/Framework documentation for data parallel deployment options.
Compile for the given number of
neuron cores so as to leverage NeuronCore Pipeline mode.
.. note::
This is not used to define the number of Neuron Cores to be used in a data
parallel deployment (ie the same model on multiple Neuron Cores). That
is a runtime/framework configuration choice.
- :option:`--output` (string) (default=“out.neff”): Filename where compilation
output (NEFF archive) will be recorded.
- :option:`--io-config` (string): Configuration containing the names and shapes
of input and output tensors.
The io-config can be specified as a local filename, a URI, or a string
containing the io-config itself.
The io-config must be formatted as a JSON object with two members
“inputs” and “outputs”. “inputs” is an object mapping input tensor names
to an array of shape and data type. “outputs” is an array of output
tensor names. Consider the following example:
.. code-block:: json
{
"inputs": {
"input0:0": [[1,100,100,3], "float16"],
"input1:0": [[1,100,100,3], "float16"]
},
"outputs": ["output:0"]
}
- :option:`--enable-saturate-infinity` : Convert +/- infinity values to MAX/MIN_FLOAT for certain computations that have a high risk of generating Not-a-Number (NaN) values. There is a potential performance impact during model execution when this conversion is enabled.
- :option:`--enable-fast-loading-neuron-binaries` : Write the compilation
output (NEFF archive) in uncompressed format which results
in faster loading of the archive during inference.
- :option:`--enable-fast-context-switch` : Optimize for faster model switching
rather than inference latency. This results in overall faster system
performance when your application switches between models frequently
on the same neuron core (or set of cores). The optimization
triggered by this option for example defers loading some weight
constants until the start of inference.
- :option:`--fast-math` : Controls the tradeoff between performance and accuracy for fp32 operators. See more suggestions on how to use this option with the arguments below in :ref:`neuron-cc-training-mixed-precision`.
- ``all`` (Default): enables all optimizations that improve performance. This option can potentially lower precision/accuracy.
- ``none`` : Disables all optimizations that improve performance. This option will provide best precision/accuracy.
- Tensor transpose options
- ``fast-relayout``: Only enables fast relayout optimization to improve performance by using the matrix multiplier for tensor transpose. The data type used for the transpose is either FP16 or BF16, which is controlled by the ``fp32-cast-xxx`` keyword.
- ``no-fast-relayout``: Disables fast relayout optimization which ensures that tensor transpose is bit-accurate (lossless) but slightly slower.
- Casting options
- ``fp32-cast-all`` (Default): Cast all FP32 operators to BF16 to achieve highest performance and preserve dynamic range. Same as setting ``--fp32-cast all``.
- ``fp32-cast-all-fp16``: Cast all FP32 operators to FP16 to achieve speed up and increase precision versus BF16. Same setting as ``--fp32-cast all-fp16``.
- ``fp32-cast-matmult``: Only cast FP32 operators that use Neuron Matmult engine to BF16 while using FP16 for matmult-based transpose to get better accuracy. Same as setting ``--fp32-cast matmult``.
- ``fp32-cast-matmult-bf16``: Cast only FP32 operators that use Neuron Matmult engine (including matmult-based transpose) to BF16 to preserve dynamic range. Same as setting ``--fp32-cast matmult-bf16``.
- ``fp32-cast-matmult-fp16``: Cast only FP32 operators that use Neuron Matmult engine (including matmult-based transpose) to fp16 to better preserve precision. Same as setting ``--fp32-cast matmult-fp16``.
.. important ::
* ``all`` and ``none`` are mutually exclusive
* ``all`` is equivalent to using ``fp32-cast-all fast-relayout`` (best performance)
* ``none`` is equivalent to using ``fp32-cast-matmult-bf16 no-fast-relayout`` (best accuracy)
* ``fp32-cast-*`` options are mutually exclusive
* ``fast-relayout`` and ``no-fast-relayout`` are mutually exclusive
* The ``fp32-cast-*`` and ``*-fast-relayout`` options will overwrite the default behavior in ``all`` and ``none``.
* For backward compatibility, the ``--fp32-cast`` option has higher priority over ``--fast-math``. It will overwrite the FP32 casting options in any of the ``--fast-math`` options if ``--fp32-cast`` option is present explicitly.
- :option:`--fp32-cast` : Refines the automatic casting of fp32 tensors. This option is being replaced by the newer ``--fast-math`` option.
.. important ::
* ``--fp32-cast`` option is being deprecated and ``--fast-math`` will replace it in future releases.
* ``--fast-math`` is introducing the ``no-fast-relayout`` option to enable lossless transpose operation.
The ``--fp32-cast`` option is an interface for controlling the performance and accuracy tradeoffs. Many of the ``--fast-math`` values invoke (override) it.
- ``all`` (default): Cast all FP32 operators to BF16 to achieve speed up and preserve dynamic range.
- ``matmult``: Cast only FP32 operators that use Neuron Matmult engine to BF16 while using fp16 for matmult-based transpose to get better accuracy.
- ``matmult-fp16``: Cast only FP32 operators that use Neuron Matmult engine (including matmult-based transpose) to fp16 to better preserve precision.
- ``matmult-bf16``: Cast only FP32 operators that use Neuron Matmult engine (including matmult-based transpose) to BF16 to preserve dynamic range.
- ``all-fp16``: Cast all FP32 operators to FP16 to achieve speed up and better preserve precision.
**Log Levels:**
Logs at levels “trace”, “debug”, and “info” will be written to STDOUT.
Logs at levels “warn”, “error”, and “fatal” will be written to STDERR.
**Exit Status**
**0** - Compilation succeeded
**>0** - An error occurred during compilation.
**Examples**
Compiling a saved TensorFlow model:
.. code-block:: shell
neuron-cc compile test_graph_tfmatmul.pb --framework TENSORFLOW --io-config test_graph_tfmatmul.config
Compiling an MXNet model:
.. code-block:: shell
neuron-cc compile lenet-symbol.json lenet-0001.params --framework MXNET --neuroncore-pipeline-cores 2 --output file.neff
Compiling an XLA HLO:
.. code-block:: shell
neuron-cc compile bert-model.hlo --framework XLA --output file.neff
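Two further illustrative sketches: choosing an explicit performance/accuracy tradeoff with the ``--fast-math`` arguments described above, and passing the io-config inline as a string instead of a file. The file and tensor names below are placeholders (some reused from the examples above).
.. code-block:: shell
   # Sketch: cast only matmult operators to BF16 and keep tensor transpose lossless
   neuron-cc compile test_graph_tfmatmul.pb --framework TENSORFLOW --io-config test_graph_tfmatmul.config --fast-math fp32-cast-matmult no-fast-relayout
   # Sketch: supply the io-config as an inline JSON string rather than as a file
   neuron-cc compile model.pb --framework TENSORFLOW --io-config '{"inputs": {"input0:0": [[1,100,100,3], "float16"]}, "outputs": ["output:0"]}' --output model.neff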
.. _neuron-cc-list-operators:
.. option:: neuron-cc list-operators [parameters]
.. _description-1:
Returns a newline ('\n') separated list of operators supported by the NeuronCore.
- **TENSORFLOW**: Operators will be formatted according to the value
passed to the associated REGISTER_OP(“OperatorName”) macro.
See https://www.tensorflow.org/guide/create_op#define_the_op_interface
for more information regarding operator registration in TensorFlow.
- **MXNET**: Operator names will be formatted according to the value
passed to the associated NNVM_REGISTER_OP(operator_name) macro.
- **XLA**: Operator names will be formatted according to the value used by XLA compiler in XlaBuilder.
See https://www.tensorflow.org/xla/operation_semantics for more information regarding XLA operator semantics in XLA interface.
.. code-block:: shell
neuron-cc list-operators --framework <value>
.. _options-1:
- :option:`--framework` (string): Framework in which the operators were
registered.
Valid values:
- :option:`TENSORFLOW`
- :option:`MXNET`
- :option:`XLA`
**Exit Status**
**0** - Call succeeded
**>0** - An error occurred
**Example**
.. code-block:: shell
$ neuron-cc list-operators --framework TENSORFLOW
AddN
AdjustContrastv2
CheckNumerics
...
```
|
|
2023-09-29T20:54:58.532Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/compiler/neuron-cc/misc-neuron-cc.rst.txt
|
```
Misc (neuron-cc)
================
.. toctree::
:maxdepth: 1
FAQ </compiler/neuron-cc/faq>
What's New </release-notes/compiler/neuron-cc/neuron-cc>
/release-notes/compiler/neuron-cc/neuron-cc-ops/index
```
|
|
2023-09-29T20:54:58.568Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/compiler/neuron-cc/api-reference-guide.rst.txt
|
```
API Reference Guide
===================
.. toctree::
:maxdepth: 1
/compiler/neuron-cc/command-line-reference
```
|
|
2023-09-29T20:54:58.609Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-customops/index.rst.txt
|
```
.. _neuron_c++customops:
Neuron Custom C++ Operators [Experimental]
==========================================
.. include:: /neuron-customops/customops-intro.txt
.. note::
The Neuron Custom C++ Operators feature is available only starting from the second generation of NeuronCore (NeuronCore-v2)
.. toctree::
:maxdepth: 1
:hidden:
/neuron-customops/api-reference-guide/api-reference-guide
.. toctree::
:maxdepth: 1
:hidden:
/neuron-customops/programming-guide/programming-guide
.. toctree::
:maxdepth: 1
:hidden:
/neuron-customops/tutorials/tutorials
.. toctree::
:maxdepth: 1
:hidden:
/neuron-customops/misc-customops
.. dropdown:: API Reference Guide
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
:open:
* :ref:`custom-ops-api-ref-guide`
.. dropdown:: Developer Guide
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
:open:
* :ref:`feature-custom-operators-devguide`
.. dropdown:: Tutorials
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
:open:
* :ref:`neuronx-customop-mlp-tutorial`
* :ref:`neuronx-customop-mlp-perf`
.. dropdown:: Misc
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
:open:
* :ref:`gpsimd-customop-tools-rn`
* :ref:`gpsimd-customop-lib-rn`
```
|
|
2023-09-29T20:54:58.631Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/compiler/neuron-cc.rst.txt
|
```
.. _neuron-cc-index:
Neuron Compiler for Inf1
========================
.. toctree::
:maxdepth: 1
API Reference Guide </compiler/neuron-cc/api-reference-guide>
Developer Guide </compiler/neuron-cc/developer-guide>
Misc </compiler/neuron-cc/misc-neuron-cc>
```
|
|
2023-09-29T20:54:58.661Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/general/appnotes/neuron-cc/mixed-precision.rst.txt
|
```
.. _neuron-cc-training-mixed-precision:
Mixed precision and performance-accuracy tuning (``neuron-cc``)
===============================================================
.. contents:: Table of contents
:local:
:depth: 2
The Neuron Compiler supports machine learning models with FP32,
FP16 and BF16 (Bfloat16) tensors and operators. The Neuron hardware supports a
mix of 32 and 16 bit datatypes.
The available auto-cast methods and their performance / accuracy trade-offs
are explained in this document.
Neuron Hardware
-------------------
The Neuron hardware supports matrix multiplication using FP16 or BF16 on its Matmult Engine, and
accumulations using FP32.
Similarly, operators such as activations or vector operations
are supported using FP16, BF16 and FP32.
Neuron supports tensor transpose in two ways - by fast matrix
multiplication in FP16/BF16 or by slower byte-by-byte data movements.
Performance-accuracy tradeoffs for models trained in FP32
---------------------------------------------------------
Models that are trained using FP32 data types can be deployed on Neuron
through ahead of time compilation using the :ref:`Neuron Compiler <neuron_cli>`.
**By default**, the Neuron Compiler will **cast all FP32 tensors,
weights and operations to BF16**. Only partial sums are left in FP32. This default casting will generate the highest
performance for an FP32-trained model.
Using the ``--fast-math`` CLI option, you can choose the right
tradeoff between performance and accuracy. The tradeoff is usually between achieving high performance and optimal accuracy, and the decision of which settings to use will be application specific.
It is recommended that you start by compiling the model for high performance (the default). You can then
test the accuracy of the application and, if needed, try the next higher precision casting option until the desired
accuracy and performance are achieved. A typical flow can be:
1. You can compile without options (default) or with ``--fast-math all`` which will optimize for performance.
2. If accuracy is not sufficient you can try ``--fast-math fp32-cast-matmult``
3. If accuracy is not sufficient you can try ``--fast-math fp32-cast-matmult no-fast-relayout``
4. If accuracy is not sufficient you can try ``--fast-math none``, which will optimize for accuracy.
Between step 2 and step 3, and between step 3 and step 4, you have additional options that can provide different levels of accuracy; these are explained in the section below.
Note that the compiler has to preserve the input/output (i/o) tensor types requested by the Framework; therefore, no casting is done on the i/o tensors. Additional speedup can be obtained by casting them in the Framework prior to compilation.
To learn how to use compiler command line interface (CLI) options with your application's framework, please see :ref:`torch_neuron_trace_api`, :ref:`tensorflow-ref-neuron-compile-api` and :ref:`tensorflow-ref-neuron-tracing-api`.
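For illustration, step 3 of the flow above corresponds to a direct compiler invocation along the following lines. This is only a sketch: the model and io-config file names are placeholders, and in framework-based workflows these options are instead passed through the trace/compile APIs referenced above.
.. code-block:: shell
   # Sketch: cast only matmult operators to BF16 and keep tensor transpose lossless
   neuron-cc compile model.pb --framework TENSORFLOW --io-config io_config.json --fast-math fp32-cast-matmult no-fast-relayout --output model.neff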
Compiler casting options
------------------------
``--fast-math`` option
^^^^^^^^^^^^^^^^^^^^^^^^
The ``--fast-math`` option is intended to replace the ``--fp32-cast`` option. It is recommended
to start using or migrating to the ``--fast-math`` option. The ``--fast-math`` option provides the same level of functionality
as the ``--fp32-cast`` option, in addition to the following:
* The ``--fast-math`` option introduces the ``no-fast-relayout`` option to enable lossless transpose operation. This was not possible with the ``--fp32-cast`` option.
* The ``--fast-math`` option provides finer control than the ``--fp32-cast`` option. The transpose operation and the cast operation are controlled independently:
- ``no-fast-relayout`` and ``fast-relayout`` provide control for the transpose operation.
- ``fp32-cast-*`` provide control for casting.
See the detailed list of the options in :ref:`/compiler/neuron-cc/command-line-reference.rst`.
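For example, a compilation that keeps the aggressive default casting but makes the transpose lossless can be expressed by combining the two groups of arguments, as in the following sketch (the model and io-config file names are placeholders):
.. code-block:: shell
   # Sketch: aggressive FP32-to-BF16 casting combined with a lossless (bit-accurate) transpose
   neuron-cc compile model.pb --framework TENSORFLOW --io-config io_config.json --fast-math fp32-cast-all no-fast-relayout --output model.neff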
```
|
|
2023-09-29T20:54:58.790Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-customops/api-reference-guide/api-reference-guide.rst.txt
|
```
API Reference Guide
===================
.. toctree::
:maxdepth: 1
/neuron-customops/api-reference-guide/custom-ops-ref-guide
```
|
|
2023-09-29T20:54:58.890Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/compiler/neuron-cc/neuron-cc.rst.txt
|
```
.. _neuron-cc-rn:
Neuron Compiler (``neuron-cc``) for Inf1 Release Notes
======================================================
.. contents:: Table of contents
:local:
:depth: 1
Introduction
^^^^^^^^^^^^
This document lists the release notes for AWS Neuron compiler. The
Neuron Compiler is an ahead-of-time compiler that ensures Neuron will
optimally utilize the Inferentia chips.
Operator-support for each input format is provided directly from the
compiler.
::
neuron-cc list-operators --framework {TENSORFLOW | MXNET | XLA}
The supported operators are also listed here:
Tensorflow: :ref:`neuron-cc-ops-tensorflow`
Pytorch: :ref:`neuron-cc-ops-pytorch`
XLA: :ref:`neuron-cc-ops-xla`
Apache MXNet (Incubating): :ref:`neuron-cc-ops-mxnet`
Known issues and limitations - updated 11/23/2022
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* There is a known issue of increased latency and lower throughput when the MLM head is compiled along with the BERT model. The workaround is to compile them separately and feed the raw BERT output into the head.
* *TensorFlow 2.x* - In this release supported operators are limited to BERT-like models, specifically no conv2d or reduce-window operators are available.
* *Control flow* Neuron only supports control flow operators which are static at compile time, for example static-length RNN, top-k, and sort.
* *Data layout* The Neuron compiler supports multiple data layout formats (NCHW, NHWC, …). Non-CNHW input/output data-layouts will require Neuron to insert additional transpose operations, causing a degradation in performance.
* *Primary inputs in NeuronCore Pipeline mode* When a neural network is executed in NeuronCore Pipeline mode, only the first operator in a neural network can receive primary inputs from the host.
* *Reduce data type* INT8 data type is not currently supported by the Neuron compiler.
* *NeuronCore Pipeline:* NeuronCore Pipeline mode provides low latency and high throughput for small batch sizes. We recommend starting testing with batch=1 and gradually increasing the batch size to fine-tune your model's throughput and latency performance.
* *Large input tensors* support varies by model. On some models, large input tensors (e.g. 1024x1024) may result in lower performance or exceed hardware or compile-time limits, especially on models where the large input tensor is used by many downstream operators. Workarounds may include using a smaller batch size; see
:ref:`neuron-batching`
* *Conv2d operator* is mapped to Inferentia except for specific cases of extremely large tensors and specific parameters.
* *Conv3d operator* performance is limited when the operator has a small number of input channels (< 64).
* FP64 and INT64 input and output tensors are not supported. Please cast to FP32/INT32 in the machine learning framework prior to compiling for Neuron.
Neuron Compiler release [1.19.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 09/15/2023
* Minor bug fixes.
Neuron Compiler release [1.17.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 7/19/2023
New in this release
-------------------
* This release introduces a new ``--enable-saturate-infinity`` compiler option. A computation that can generate +/- infinity is at a high risk of generating Not-a-Number (NaN) values when the infinity value is used in subsequent computations. This option helps avoid this by converting +Inf/-Inf values to MAX/MIN_FLOAT before operations that could produce NaN values for +Inf/-Inf inputs on the target architecture. While this option helps to avoid NaN values, there is a potential performance degradation that occurs during model execution when this conversion is enabled.
* Minor bug fixes.
Neuron Compiler release [1.16.2.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 6/14/2023
* Minor bug fixes.
Neuron Compiler release [1.15.0.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 05/01/2023
* Minor bug fixes.
Neuron Compiler release [1.14.3.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 04/19/2023
* Minor bug fixes.
Neuron Compiler release [1.13.3.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 11/23/2022
* Resolved long compile-times when compiling the YOLOv5 and YOLOv6 models. [GitHub · aws-neuron-sdk · #434]
* Improved the layout algorithm to resolve an issue compiling a transformer-based text recognition model. [GitHub · aws-neuron-sdk · #410]
* Support was added for additional XLA operators
Neuron Compiler release [1.11.7.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 08/02/2022
* Fixed a bug in the handling of the MXNet dropout instruction when mode is set to 'training' while performing inference.
Neuron Compiler release [1.11.4.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 04/29/2022
* Solved an issue that caused a "false positive" reporting of a data race that may occur due to address overlap.
* Minor bug fixes.
Neuron Compiler release [1.10.3.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 03/25/2022
* Minor bug fixes.
Neuron Compiler release [1.9.1.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 01/20/2022
* Fixed an issue with frontend compiler for fused operators that was reported in `github #362 <https://github.com/aws/aws-neuron-sdk/issues/362>`_.
Neuron Compiler release [1.8.5.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 01/05/2022
New in this release
-------------------
* Minor bug fixes.
Neuron Compiler release [1.8.2.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 12/15/2021
New in this release
-------------------
* Performance enhancements as a result of improved layout and DMA optimizations.
* Minor bug fixes.
Neuron Compiler release [1.7.3.0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Date: 10/27/2021
New in this release
-------------------
* The compiler’s list-operators command can now display the supported TensorFlow 2.x operators.
* Support added for new operators in TensorFlow 1.x - ArgMax and ArgMin.
* Introducing the ``--fast-math`` option for better fine-tuning of accuracy/performance. See :ref:`neuron-cc-training-mixed-precision`
[1.6.13.0]
^^^^^^^^^^
Date 08/12/2021
New in this release
-------------------
* TensorFlow 2.x - First support of TensorFlow 2.x. The support is limited to operators in BERT-like models and was tested with Huggingface BERT small, base, large and DistilBERT.
Resolved issues
---------------
* Fixed compiler backend issue in Tensor_tensor argument distance, `github #269 <https://github.com/aws/aws-neuron-sdk/issues/269>`_
[1.5.5.0]
^^^^^^^^^
Date 07/02/2021
Summary
-------
- Robustness and performance improvements.
New in this release
-------------------
* Added ``--enable-fast-context-switch`` option to optimize for faster model switching rather than inference latency.
* Deprecated support for ONNX
* Improved robustness of Conv3d
* Corrected compilation error "too many instructions" in DLRM model
[1.4.0.0]
^^^^^^^^^
Date 5/28/2021
Summary
-------
- Performance improvements, and usability improvements.
New in this release
-------------------
* Added uncompressed NEFF format for faster loading of models prior to inference. Enable it with ``--enable-fast-loading-neuron-binaries``. Some cases of large models may be detrimentally impacted as the NEFF will not be compressed, but many cases will benefit.
* Corrected compilation error in specific arguments of ResizeBilinear operator
[1.3.0.0]
^^^^^^^^^
Date 4/30/2021
Summary
-------
- Performance improvements, new operators, and usability improvements.
New in this release
-------------------
- Improved performance of batched CNN models such as ResNet50 by 10% with the default compiler options.
- Improved performance of BERT base (sequence 128, batch 6) by up to 16%.
- Added support for group and depthwise convolution (with limited performance when the number of input channels is small).
- Added more detailed debug names to support TensorBoard.
Resolved Issues
---------------
- Corrected potential race condition in overwriting tiles of output tensors.
- Fixed various issues in pipelined inference by enabling fine grain partitioning by default.
[1.2.7.0]
^^^^^^^^^
Date 2/24/2021
Summary
-------
Fix for CVE-2021-3177.
[1.2.2.0]
^^^^^^^^^
Date 1/30/2021
Summary
-------
Added support for multiple new operators (see operators list) for TensorFlow and MXNet. Improved inference performance of language and object recognition models on single as well as multiple pipelined cores using NeuronCore Pipeline.
New in this release
-------------------
- The following models are now supported: Resnext 224x224, specific BERT variations applied to natural language processing and translation.
- A number of new operators is now supported on Inferentia, see the full lists :ref:`neuron-cc-ops-tensorflow`
and :ref:`neuron-cc-ops-mxnet`
- Improved inference performance of YOLOv4, BERT base sequence 64 (on 16 pipelined cores), and openpose 184.
Resolved Issues
---------------
- Corrected a random failure to compile Resnet50 batch 5
- Corrected numerical inaccuracy in RSQRT and related operators for tensors with very large values ( > 1e20)
[1.1.7.0]
^^^^^^^^^
Date 12/23/2020
Summary
-------
Added support for PyTorch YOLO v4, a new framework-visible progress bar, and improved inference performance. We continue to streamline compiler usability by removing the need for options passed to control behavior, and we aim to remove the need for such options entirely. Some tutorials have been updated to reflect this, but ResNet50 still requires these options to achieve maximum performance. Other usability improvements have been added, such as the compiler progress bar. As always, please let us know if there are other areas that we can improve.
New in this release
-------------------
- PyTorch YOLO v4 is now supported.
- Added a compiler progress bar when compilation is invoked from the Framework. This allows the user to see that progress continues as compilation proceeds, which is useful when compilation takes several minutes. A dot is printed every 20 seconds.
- Improved inference performance of TensorFlow BERT base (sequence 256, batch 3) by 10%.
Resolved Issues
---------------
- Resolved issue with depthwise convolution that manifests as a type check error
.. _10240450:
[1.0.24045.0]
^^^^^^^^^^^^^
Date 11/17/2020
Summary
-------
Improved performance for pipelined execution (NeuronCore Pipeline).
New in this release
-------------------
- NeuronCore Pipeline: improved partitioning to enable better static
weights loading to cache.
Resolved Issues
---------------
- --static-weights : No longer needed. As this is shown in some
examples, please remove the option since the compiler now performs
this auto-detection by default.
- --num-neuroncores renamed to --neuroncore-pipeline-cores. The prior
option form is still functional (backwards compatible) and will be
removed in future releases.
- --batching_en: Resolved compilation failure of ResNet50 FP32 batch 1
on Ubuntu16 when "--batching_en" was used.
.. _neuron-cc-10206000:
[1.0.20600.0]
^^^^^^^^^^^^^
Date 9/22/2020
Summary
-------
Various performance improvements - both compilation time and inference
speed of object recognition models.
- Compiler optimization '-O2' option is now enabled by default.
.. _major-new-features-1:
New in this release
-------------------
- Improved inference performance of YOLO v3, YOLO v4, VGG16, SSD300.
BERT models were improved by an additional 10%.
- Modified so that -O2 is now the default behavior and does not need
to be specified. Note: some tutorials still explicitly specify "-O2";
these will be updated in forthcoming releases.
.. _resolved-issues-1:
Resolved Issues
---------------
- Sped up compilation of large models that previously took hours to under 40
minutes.
.. _neuron-cc-10180010:
[1.0.18001.0]
^^^^^^^^^^^^^
Date 8/08/2020
.. _summary-1:
Summary
-------
Various performance improvements.
.. _major-new-features-1:
New in this release
-------------------
Improved performance of BERT base with -O2
.. _resolved-issues-1:
Resolved Issues
---------------
- n/a
.. _neuron-cc-10179370:
[1.0.17937.0]
^^^^^^^^^^^^^
Date 8/05/2020
.. _summary-2:
Summary
-------
Various improvements.
.. _neuron-cc-10168610:
[1.0.16861.0]
^^^^^^^^^^^^^
Date 7/16/2020
.. _summary-3:
Summary
-------
This release has some bug fixes and some functional and performance
improvements to support compilation of several neural networks.
.. _major-new-features-2:
New in this release
-------------------
This release:
- Supports compilation of PoseNet, tested for images of specific
resolutions up to 736.
- Updates -O2 with a new memory allocator to reduce spilling to DRAM.
- Improves performance of '-O2' on BERT base and the openpose pose
network.
.. _resolved-issues-2:
Resolved Issues
---------------
- Resolved compilation error in Vgg16 batch 1
Other Notes
-----------
- Some versions of Inception network may fail to compile in Tensorflow
on Ubuntu 16 in conda environment. The symptom is neuron-cc backend
data race error. As a workaround use Ubuntu 18, Amazon Linux 2, or
virtual env, or use neuron-cc with flag -O2.
.. warning::
:ref:`Starting with Neuron 1.14.0, Ubuntu 16 is no longer supported <eol-ubuntu16>`
.. _neuron-cc-10152750:
[1.0.15275.0]
^^^^^^^^^^^^^
Date 6/11/2020
.. _summary-4:
Summary
-------
This release has some bug fixes and some functional and performance
improvements to support compilation of several neural networks.
.. _major-new-features-3:
New in this release
-------------------
This release:
- Supports compilation of PoseNet for images of specific resolutions
up to 400x400.
- Improves performance of ResNet152.
- Supports a new command line option '-O2' that can help with handling
of large tensor inputs for certain models.
- Increases the NEFF version to 1.0. This means new NEFFs compiled from
this release forward are not compatible with Neuron Runtime versions
prior to the May 2020 (1.0.6905.0) release. Please update the
Neuron Runtime when using NEFF version 1.0.
.. _resolved-issues-3:
Resolved Issues
---------------
- Compilation issues on prosotron encoder, decoder neural networks.
.. _other-notes-1:
Other Notes
-----------
Dependencies
------------
- This version creates NEFF 1.0 and thus may require an update of neuron-rtd
if it is older than the May 2020 release.
dmlc_nnvm==1.0.2574.0 dmlc_topi==1.0.2574.0 dmlc_tvm==1.0.2574.0
inferentia_hwm==1.0.1362.0 islpy==2018.2
.. _neuron-cc-10126960:
[1.0.12696.0]
^^^^^^^^^^^^^
Date 5/11/2020
.. _summary-5:
Summary
-------
Bug fixes and some functional and performance improvements to several
neural networks.
.. _major-new-features-4:
New in this release
-------------------
- This version supports compilation of unmodified Tensorflow BERT with
batch size 1, 4, 6 for input sequence 128.
- Improved Tensorflow BERT batch 4 sequence 128 performance to 45% of
the accelerator peak (from 34%).
- Support for MXNET BERT base batch 8 compilation
- Support for TF Resnet152 batch 2 compilation
- Most compiler messages are migrated from cout to logging mechanisms
with verbosity control
.. _resolved-issues-4:
Resolved Issues
---------------
- Fixed failure to compile unmodified Tensorflow BERT model for small
batches
- Fixed run-to-run-variability in OneHot operator implementation
- Robustness improvements for ParallelWavenet and transformer decoder
networks
.. _other-notes-2:
Other Notes
-----------
.. _dependencies-1:
Dependencies
------------
::
dmlc_nnvm==1.0.2356.0
dmlc_topi==1.0.2356.0
dmlc_tvm==1.0.2356.0
inferentia_hwm==1.0.1294.0
islpy==2018.2
.. _neuron-cc-1094100:
[1.0.9410.0]
^^^^^^^^^^^^
Date 3/26/2020
.. _summary-6:
Summary
-------
Bug fixes and some functional and performance improvements to several
neural networks.
.. _major-new-features-5:
New in this release
-------------------
- Support compilation of modified SSD-300
(:ref:`tensorflow-ssd300`)
- Improved inference performance in natural language processing
networks (such as prosotron encoder) by 45%
.. _resolved-issues-5:
Resolved Issues
---------------
- Eliminated redundant fp32 to bfloat16 cast on input and output
tensors
Known issues and limitations
----------------------------
- See previous releases.
.. _other-notes-3:
Other Notes
-----------
- Added support for faster iteration on recurrent networks (aka
auto-loop)
.. _dependencies-2:
Dependencies
------------
::
dmlc_nnvm==1.0.2049.0
dmlc_topi==1.0.2049.0
dmlc_tvm==1.0.2049.0
inferentia_hwm==1.0.897.0
islpy==2018.2
.. _neuron-cc-1078780:
[1.0.7878.0]
^^^^^^^^^^^^
Date 2/27/2020
.. _summary-7:
Summary
-------
Bug fixes and minor performance improvements.
.. _major-new-features-6:
New in this release
-------------------
None
.. _resolved-issues-6:
Resolved Issues
---------------
- Corrected image resize operator functionality
- Compiler internal enhancements made that will benefit models such as
BERT
.. _known-issues-and-limitations-1:
Known issues and limitations
----------------------------
- See previous releases.
.. _other-notes-4:
Other Notes
-----------
.. _dependencies-3:
Dependencies
------------
::
dmlc_nnvm-1.0.1826.0
dmlc_topi-1.0.1826.0
dmlc_tvm-1.0.1826.0
inferentia_hwm-1.0.897.0
islpy-2018.2
.. _neuron-cc-1068010:
[1.0.6801.0]
^^^^^^^^^^^^
Date 1/27/2020
.. _summary-8:
Summary
-------
Bug fixes and some performance enhancement related to data movement for
BERT-type neural networks.
.. _major-new-features-7:
New in this release
-------------------
None
.. _resolved-issues-7:
Resolved Issues
---------------
- Improved throughput for operators processed in the Neuron Runtime
CPU. As an example: execution of 4 single NeuronCore NEFF models of
ResNet50 v2 float16 batch = 5 in parallel on an inf1.1xlarge sped up
by 30%.
- Corrected shape handling in Gather (TensorFlow) / Take (MXNet) operators
that are processed on the Neuron Runtime vCPU,
which resolves a possible crash in the Neuron Compiler when compiling
models that use these operators with some shapes.
- Added support for TensorFlow *OneHot* operator (as a Neuron Runtime
CPU operator).
- Added more internal checking for compiler correctness with newly
defined error messages for this case.
::
“Internal ERROR: Data race between Op1 'Name1(...) [...]' and Op2 'Name2(...) [...]'”
- Fixed out-of-memory issue introduced in 1.0.5939.0 such that some
large models (BERT) compiled on instances with insufficient host
memory would cause the runtime to crash with an invalid NEFF. This is
actually a compiler error, but due to additional script layers
wrapping this in the :ref:`tensorflow-bert-demo`, this would
have likely been seen as a runtime error like this:
.. code:: bash
2020-01-09 13:40:26.002594: E tensorflow/core/framework/op_segment.cc:54] Create kernel failed: Invalid argument: neff is invalid
2020-01-09 13:40:26.002637: E tensorflow/core/common_runtime/executor.cc:642] Executor failed to create kernel. Invalid argument: neff is invalid
[[{{node bert/NeuronOp}}]]
.. _known-issues-and-limitations-2:
Known issues and limitations
----------------------------
See previous release notes. Some tutorials show use of specific compiler
options and flags; these are needed to help provide guidance to the
compiler to achieve best performance in specific cases. Please do not
use in cases other than as shown in the specific tutorial as results may
not be defined. These options should be considered experimental and will
be removed over time.
.. _other-notes-5:
Other Notes
-----------
.. _dependencies-4:
Dependencies
------------
::
dmlc_nnvm-1.0.1619.0
dmlc_topi-1.0.1619.0
dmlc_tvm-1.0.1619.0
inferentia_hwm-1.0.839.0
islpy-2018.2
.. _1059390:
[1.0.5939.0]
^^^^^^^^^^^^
Date 12/20/2019
.. _summary-9:
Summary
-------
Bug fixes and some performance enhancement for NeuronCore Pipeline.
.. _major-new-features-8:
New in this release
-------------------
.. _resolved-issues-8:
Resolved Issues
---------------
- Fixed pipeline execution on more than 10 NeuronCores
- Improved NeuronCores Pipeline execution by improving data exchange
efficiency between NeuronCores
- Added warning for unaligned memory access
- Fixed handling of cast on input FP32 tensor
- Improved handling of data layouts and transpose
- Improved dead-code elimination
- Improved efficiency of compute engine synchronization
- Improved efficiency of data transfers within the Neuron code
.. _known-issues-and-limitations-3:
Known issues and limitations
----------------------------
See previous release notes. Some tutorials show use of specific compiler
options and flags; these are needed to help provide guidance to the
compiler to achieve best performance in specific cases. Please do not
use in cases other than as shown in the specific tutorial as results may
not be defined. These options should be considered experimental and will
be removed over time.
.. _other-notes-6:
Other Notes
-----------
.. _dependencies-5:
Dependencies
------------
- dmlc_nnvm-1.0.1416.0
- dmlc_topi-1.0.1416.0
- dmlc_tvm-1.0.1416.0
- inferentia_hwm-1.0.720.0
- islpy-2018.2
.. _1053010:
[1.0.5301.0]
^^^^^^^^^^^^
Date 12/1/2019
.. _summary-10:
Summary
-------
.. _major-new-features-9:
New in this release
-------------------
.. _resolved-issues-9:
Resolved Issues
---------------
- Added warning for unsupported operators and convolution sizes
- Added warning for unsupported layout / upsampling
- Added support for Relu6, AddV2, BatchMatmulV2 operators
- Added support for default MXNet outputs in --io-config
- Improved performance of batched inference for convolutional networks
- Fixed MatMult column size 1
- Fixed bf16 constant loading
- Fixed Conv2D tile accumulation
.. _known-issues-and-limitations-4:
Known Issues and Limitations
----------------------------
See previous release notes. Resolved issues are shown in Resolved
Issues.
.. _other-notes-7:
Other Notes
-----------
Please install g++ on AMIs without g++ pre-installed (i.e. server AMIs):
.. code:: bash
# Ubuntu
sudo apt-get install -y g++
.. code:: bash
# Amazon Linux
sudo yum install -y gcc-c++
Supported Python versions:
- 3.5, 3.6, 3.7
Supported Linux distributions:
- Ubuntu 16, Ubuntu 18, Amazon Linux 2
.. _dependencies-6:
Dependencies
------------
- dmlc_nnvm-1.0.1328.0
- dmlc_topi-1.0.1328.0
- dmlc_tvm-1.0.1328.0
- inferentia_hwm-1.0.674.0
- islpy-2018.2
.. _1046800:
[1.0.4680.0]
^^^^^^^^^^^^
Date: 11/25/2019
.. _major-new-features-10:
New in this release
-------------------
N/A, this is the first release.
.. _resolved-issues-10:
Resolved issues
---------------
N/A, this is the first release.
.. _known-issues-and-limitations-5:
Known issues and limitations
----------------------------
1. **Control flow** Inferentia has limited support for control flow.
In general, Neuron can only support control flow operators which are
static at compile time, i.e. static length RNN, top-k, sort, ...
2. **Size of neural network** The size of neural network is influenced
by a) type of neural network (CNN, LSTM, MLP) , b) number of layers,
c) sizes of input (dimension of the tensors, batch size, ...). The
current Neuron compiler release has a limitation in terms of the size
of neural network it could effectively optimize. As a result, we
limit CNN models (e.g. ResNet) to have an input size of up to 480x480
FP16, batch size of 4; LSTM models (e.g. GNMT) are limited to a time
step limit of up to 900; MLP models (like BERT) are limited up to
sequence-length equal 128, batch=8.
3. **Data layout** The Neuron compiler supports multiple data layout
formats (NCHW, NHWC, ...). Non-CNHW input/output data-layouts will
require Neuron to insert additional *transpose* operations, causing a
degradation in performance.
4. **Object detection models** Computer-vision object detection and
segmentation models are not supported by the current release.
5. **Reduce data type** INT8 data type is not currently supported by the
Neuron compiler.
6. **Tensor residency** When a sub-graph that is executed on the host is
communicating with a sub-graph that is executing on Neuron cores,
tensors are copied via the communication queues between the host and
Inferentia memory for each inference, which may result in end-to-end
performance degradation.
7. **Primary inputs in NeuronCore Pipeline mode** When a neural network
is executed in NeuronCore Pipeline mode, only the first operator in a
neural network can receive primary inputs from the host.
.. _other-notes-8:
Other Notes
-----------
.. _dependencies-7:
Dependencies
------------
- nnvm: dmlc_nnvm-1.0.1219.0
- topi: dmlc_topi-1.0.1219.0
- tvm: dmlc_tvm-1.0.1219.0
- hwm: inferentia_hwm-1.0.602.0
- islpy: islpy-2018.2+aws2018.x.73.0
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/customcxxps/gpsimd-tools.rst.txt
```
.. _gpsimd-customop-tools-rn:
Neuron Custom C++ Tools Release Notes
======================================
aws-neuronx-gpsimd-tools [0.1]
------------------------------
Date: 02/08/2023
* First release of aws-neuronx-gpsimd-tools. This release provides the required tools to support the building of Neuron Custom C++ operators.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/compiler/neuron-cc/developer-guide.rst.txt
```
Developer Guide
===================
.. toctree::
:maxdepth: 1
/general/appnotes/neuron-cc/mixed-precision
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/compiler/neuron-cc/faq.rst.txt
```
.. _neuron_compiler_faq:
Neuron Compiler FAQ (``neuron-cc``)
===================================
.. contents:: Table of contents
:local:
:depth: 1
Where can I compile to Neuron?
---------------------------------
The one-time compilation step from the standard framework-level model to
NEFF binary may be performed on any EC2 instance or even
on-premises.
We recommend using a high-performance compute server of choice (C5 or
z1d instance types), for the fastest compile times and ease of use with
a prebuilt `DLAMI <https://aws.amazon.com/machine-learning/amis/>`__.
Developers can also install Neuron in their own environments; this
approach may work well for example when building a large fleet for
inference, allowing the model creation, training and compilation to be
done in the training fleet, with the NEFF files being distributed by a
configuration management application to the inference fleet.
My current Neural Network is based on FP32, how can I use it with Neuron?
-------------------------------------------------------------------------
Developers who want to train their models in FP32 for best accuracy can
compile and deploy them with Neuron. The Neuron compiler automatically converts
FP32 to internally supported datatypes, such as FP16 or BF16.
You can find more details about FP32 data type support
and performance and accuracy tuning
in :ref:`neuron-cc-training-mixed-precision`.
The Neuron compiler preserves the application interface - FP32 inputs and outputs.
Transferring such large tensors may become a bottleneck for your application.
Therefore, you can improve execution time by casting the inputs and outputs to
FP16 or BF16 in the ML framework prior to compilation for Inferentia.
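For example, a minimal MXNet sketch of casting an input tensor down before it is handed to the model; the tensor name, shape, and values here are illustrative only, not taken from any specific model:
::
   import mxnet as mx

   # An FP32 input tensor as produced by typical preprocessing.
   x_fp32 = mx.nd.ones(shape=(1, 128), dtype='float32')

   # Casting to FP16 halves the size of the tensor transferred to the device.
   x_fp16 = x_fp32.astype('float16')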
What are some of the important compiler defaults I should be aware of?
-----------------------------------------------------------------------
The compiler compiles the input graph for a single NeuronCore by default. Using the :option:`--neuroncore-pipeline-cores` option directs the compiler to
partition so as to run on a specified number of NeuronCores. This number can
be less than the total available NeuronCores on an instance.
See :ref:`inferentia-arch` for more information on NeuronCores.
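As an illustrative sketch only (the model file name, io-config contents, core count, and output name below are placeholders, not a recommended configuration), a compilation that targets four pipelined NeuronCores might look roughly like:
::
   # Placeholder file names and io-config; substitute values for your own model.
   neuron-cc compile model.pb --framework TENSORFLOW \
       --io-config '{"inputs": {"input0:0": [[1, 224, 224, 3], "float32"]}, "outputs": ["output0:0"]}' \
       --neuroncore-pipeline-cores 4 \
       --output model.neff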
Which operators does Neuron support?
---------------------------------------
See :ref:`neuron-supported-operators`.
You can also use the ``neuron-cc list-operators`` command on the CLI to list the
operators, as shown in the sketch below. See :ref:`neuron-cc-list-operators`
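A minimal invocation (the list of supported operators it prints varies with the compiler version):
::
   neuron-cc list-operators --framework TENSORFLOW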
If your model contains operators missing from the above list, and you can't reach your performance goals, please
post a message on the Neuron developer forum or open a github issue to let us know.
Any operators that Neuron doesn't support?
---------------------------------------------
Models with control-flow and dynamic shapes are not supported. You will
need to partition the model using the framework prior to compilation.
See the :ref:`neuron-cc`.
Will I need to recompile again if I updated runtime/driver version?
----------------------------------------------------------------------
The compiler and runtime are committed to maintaining compatibility for
major version releases with each other. The versioning is defined as
major.minor, with compatibility for all versions with the same major
number. If the versions mismatch, an error notification is logged and
the load will fail. This will then require the model to be recompiled.
I have a NEFF binary, how can I tell which compiler version generated it?
--------------------------------------------------------------------------
We will bring a utility out to help with this soon.
How long does it take to compile?
------------------------------------
It depends on the model and its size and complexity, but this generally
takes a few minutes.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/compiler/neuron-cc/neuron-cc-ops/index.rst.txt
```
.. _neuron-supported-operators:
Neuron Supported operators
==========================
.. toctree::
:maxdepth: 1
frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops
neuron-cc-ops-tensorflow
neuron-cc-ops-pytorch
neuron-cc-ops-mxnet
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-customops/programming-guide/programming-guide.rst.txt
```
Developer Guide
===============
.. toctree::
:maxdepth: 1
/neuron-customops/programming-guide/custom-c++-operators-devguide
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-customops/misc-customops.rst.txt
```
Misc (Neuron Custom C++ Operators)
==================================
.. toctree::
:maxdepth: 1
/release-notes/customcxxps/gpsimd-tools
/release-notes/customcxxps/gpsimd-customop-lib
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-customops/programming-guide/custom-c++-operators-devguide.rst.txt
```
.. _feature-custom-operators-devguide:
Neuron Custom C++ Operators Developer Guide [Experimental]
==========================================================
This document gives an overview of the Neuron Custom C++ Operator feature and APIs. Currently, CustomOp support is limited to the PyTorch framework.
Please refer to the following documents for further information regarding Neuron Custom C++ Operators:
* :ref:`neuronx-customop-mlp-tutorial`
* :ref:`neuronx-customop-mlp-perf`
* :ref:`custom-ops-api-ref-guide`
.. contents:: Table of contents
:local:
:depth: 1
Setup & Installation
--------------------
.. note::
The name of ``aws-neuronx-gpsimd-customop`` has been changed to ``aws-neuronx-gpsimd-customop-lib`` as of the neuron 2.10 release.
We provide tooling and library packages (RPM and DEB) that can be installed on TRN1 and INF2 instances:
::
aws-neuronx-gpsimd-tools-0.3
aws-neuronx-gpsimd-customop-lib-0.3
They can be installed with the following commands:
::
sudo yum remove aws-neuronx-gpsimd-tools-0.* -y
sudo yum remove aws-neuronx-gpsimd-customop-lib-0.* -y
sudo yum install aws-neuronx-gpsimd-tools-0.* -y
sudo yum install aws-neuronx-gpsimd-customop-lib-0.* -y
Implementing an operator in C++
-------------------------------
Custom operators require a function that defines the custom computation. We define this as the **kernel function**. Neuron Custom C++ Operators also contain a **shape function** separate from the normal compute code. This *shape function* defines the shapes of output tensors for a given set of inputs to the operator. This is needed because PyTorch Neuron (torch-neuronx) is based on the PyTorch/XLA software package and uses a Just-In-Time (JIT) compilation strategy. At runtime the operators in the model will be compiled into a binary to be executed on the NeuronCore. During compilation the shapes of the input and output tensors to operators are computed. The **shape function** is executed on the host, whereas the **kernel function** is executed on the NeuronCore.
Kernel Function
^^^^^^^^^^^^^^^
The kernel function contains the C++ implementation of the CustomOp, as shown in the example below. By including torch.h in the source, the developer has access to a NeuronCore-ported subset of the torch C++ api (https://pytorch.org/cppdocs/). The port contains everything required for CustomOp development and model integration, specifically Tensor and Scalar classes in c10, and a subset of aTen operators.
::
#include <stdint.h>
#include <stdlib.h>
#include <torch/torch.h>
torch::Tensor tensor_negate_compute(const torch::Tensor& t_in) {
size_t num_elem = t_in.numel();
torch::Tensor t_out = torch::zeros({num_elem}, torch::kFloat);
auto t_in_acc = t_in.accessor<float, 1>();
auto t_out_acc = t_out.accessor<float, 1>();
for (size_t i = 0; i < num_elem; i++) {
t_out_acc[i] = -1 * t_in_acc[i];
}
return t_out;
}
The kernel function is the main computational code for the operator. We support a subset of the input types usable by regular PyTorch Custom Operators: ``torch::Tensor``, ``torch::Scalar``, ``double``, and ``int64_t``. However we do not support ``std::vector`` or ``std::tuple`` of these types at this time. Note that similar to regular PyTorch Custom Operators, only ``double`` and not ``float``, and only ``int64_t`` and not other integral types such as ``int``, ``short`` or ``long`` are supported. The return value must be a ``torch::Tensor``.
The body of the kernel function may exercise C/C++ libraries, ``torch::Tensor`` classes, and select ATen operators, as is customary for Torch programming. For higher performance, the feature provides faster memory access via new Tensor Accessor classes and stack-management compiler flags. Additionally, higher performance can be obtained by parallelizing execution of the kernel over multiple GPSIMD cores. See the :ref:`custom-ops-api-ref-guide` for more details.
Finally, because the kernel is specially compiled for and run by the NeuronCore target, its tooling, libraries, and environment differ from the host PyTorch installation. For example, while the host may run PyTorch 1.13 and a C++17-compatible compiler in a Linux environment, the NeuronCore may run a port of PyTorch 1.12 (c10) and LLVM’s libc++ C++14 version 10.0.1 without Linux. Developers must develop for the compiler, torch version, and environment of their targeted NeuronCore. See the :ref:`custom-ops-api-ref-guide` for more details.
Shape Function
^^^^^^^^^^^^^^
The shape function has the same function signature as the kernel function, but does not perform any computations. Rather, it only defines the shape of the output tensor, not the actual values.
::
#include <stdint.h>
#include <stdlib.h>
#include <torch/torch.h>
torch::Tensor tensor_negate_shape(torch::Tensor t1) {
size_t num_elem = t1.numel();
torch::Tensor t_out = torch::zeros({num_elem}, torch::kFloat);
return t_out;
}
The body of the shape function may exercise C/C++ libraries or ``torch::Tensor`` classes. The body may not access the data of input tensors, since these are XLA Tensors and do not have any data storage allocated yet. However, any of the functions that access shape information, such as *numel* (to get the number of elements), may be used.
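For an operator whose output shape differs from the input shape, the shape function simply returns a tensor of the output shape. As a hypothetical illustration (this operator is not part of the guide), a sum-reduction that collapses the input to a single element might have the following shape function:
::
    #include <torch/torch.h>

    // Hypothetical shape function for a sum-reduction operator:
    // whatever the input size, the output holds a single float element.
    // Only the shape matters here; the values are never read.
    torch::Tensor tensor_sum_shape(torch::Tensor t1) {
        torch::Tensor t_out = torch::zeros({1}, torch::kFloat);
        return t_out;
    }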
Building and executing operators
--------------------------------
Once you have the kernel and shape functions for your operators, you can build them into a library to use them from PyTorch in your model. Just like regular PyTorch Custom Operators, Neuron Custom C++ Operators use a registration macro to associate the kernel and shape functions with the name of the operator that will be called from Python.
Similar to PyTorch, Neuron Custom C++ Operators are grouped into libraries defined within the ``NEURON_LIBRARY(<lib_name>, m)`` scope, where ``lib_name`` is the name of your library of custom operators. Within this scope, calls to ``m.def(<op_name>, <shape_fcn>, <kernel_fcn>)`` define each operator in your library. The ``op_name`` is the name used to call the operator from the model (i.e. ``torch.ops.lib_name.op_name()``). The ``shape_fcn`` is a function pointer to the shape function to call during compilation. Finally, the ``kernel_fcn`` is the name of the function to be executed on the NeuronCore at runtime.
::
#include <stdint.h>
#include <stdlib.h>
#include <torch/torch.h>
#include "torchneuron/register.h"
torch::Tensor tensor_negate_shape(torch::Tensor t1) {
size_t num_elem = t1.numel();
torch::Tensor t_out = torch::zeros({num_elem}, torch::kFloat);
return t_out;
}
NEURON_LIBRARY(my_ops, m) {
m.def("tensor_negate", &tensor_negate_shape, "tensor_negate_compute");
}
Notice that the ``NEURON_LIBRARY`` macro is used in the same C++ file as the shape function. This is because the registration is loaded on the host.
The custom op library is built by calling the ``load`` API in Python like:
::
import torch_neuronx
from torch_neuronx.xla_impl import custom_op
custom_op.load(
name=name,
compute_srcs=['kernel.cpp'],
shape_srcs=['shape.cpp']
)
In the example above, ``name`` refers to the name of the library file to be created (i.e. ``libmy_ops.so``), and ``compute_srcs`` and ``shape_srcs`` are lists of files to be compiled. After the ``load`` API completes, the library will have been compiled and loaded into the current PyTorch process.
Similar to PyTorch, the Neuron custom op will be available at ``torch.ops.<lib_name>.<op_name>`` where ``lib_name`` is defined in the ``NEURON_LIBRARY`` macro, and ``op_name`` is defined in the call to ``m.def``.
::
import torch
out_tensor = torch.ops.my_ops.tensor_negate(in_tensor)
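Putting these pieces together, a minimal end-to-end sketch of calling the op from a model might look like the following. This assumes a torch-neuronx environment where the NeuronCore is exposed as an XLA device through ``torch_xla``; the module, library name, and tensor shape are illustrative only.
::
    import torch
    import torch_neuronx
    import torch_xla.core.xla_model as xm
    from torch_neuronx.xla_impl import custom_op

    # Build and load the custom op library (file names are illustrative).
    custom_op.load(
        name='my_ops',
        compute_srcs=['kernel.cpp'],
        shape_srcs=['shape.cpp'],
    )

    class Negate(torch.nn.Module):
        def forward(self, x):
            return torch.ops.my_ops.tensor_negate(x)

    device = xm.xla_device()                    # NeuronCore exposed as an XLA device
    model = Negate().to(device)
    out = model(torch.rand(8, device=device))   # 1-D input to match the example kernel
    print(out)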
Loading a previously built library
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The library can also be built ahead of time or in a separate process and loaded later. In the ``load`` API, specify the ``build_directory`` argument and the library will be written to that location on disk.
::
import os

import torch_neuronx
from torch_neuronx.xla_impl import custom_op

custom_op.load(
    name=name,
    compute_srcs=['kernel.cpp'],
    shape_srcs=['shape.cpp'],
    build_directory=os.getcwd(),
)
Then, later, this library can be loaded by calling the ``load_library`` API and using the ops in the exact same way.
::
import torch
import torch_neuronx
from torch_neuronx.xla_impl import custom_op
custom_op.load_library('/home/user/libmy_ops.so')
out_tensor = torch.ops.my_ops.tensor_negate(in_tensor)
Note: The ``load_library`` API does not need to be called in the same process where the library is built with the ``load`` API. Similar to regular PyTorch Custom Operators, Neuron Custom C++ Operators are built and loaded at the same time when the ``load`` API is called.
Performance Guidance
--------------------
When possible, use operators that the framework natively supports and that compile onto Neuron devices; these operators have been highly optimized for the Neuron architecture. However, for scenarios where Custom C++ operators are the required solution, the following recommendations can help improve performance:
* Use the provided memory management accessors (streaming and tcm accessor). Both of these accessors improve data fetch overhead. See the :ref:`custom-ops-api-ref-guide` for more information.
* You can optionally specify the estimated amount of stack space (in bytes) used in your Custom C++ operator via the ``extra_cflags`` argument in the call to ``custom_op.load()``. For instance, if you anticipate your operator using ~20KB of stack space, include the argument ``extra_cflags=['-DSTACK_SIZE=20000']`` in the call to ``custom_op.load()``. **This is necessary only if you anticipate the stack to grow beyond 6KB.** Otherwise, the stack will automatically be placed in local memory, which significantly improves performance. Note, however, that if you do not specify the stack size but your stack grows beyond 6KB, there is a risk of a stack overflow, and you will be notified with an error message from GPSIMD should such a case occur. If you do specify a stack size, the maximum supported stack size is 400KB. A sketch of such a call is shown after this list.
* Use multiple GPSIMD cores when possible to parallelize (and hence improve the performance of) your Custom C++ operator; refer to the `Using multiple GPSIMD cores` section in :ref:`custom-ops-api-ref-guide` for more information.
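For example (as noted in the stack-size recommendation above), a call to ``custom_op.load()`` that reserves roughly 20KB of stack could look like the following sketch; the source file and library names are illustrative:
::
    import torch_neuronx
    from torch_neuronx.xla_impl import custom_op

    custom_op.load(
        name='my_ops',
        compute_srcs=['kernel.cpp'],
        shape_srcs=['shape.cpp'],
        extra_cflags=['-DSTACK_SIZE=20000'],  # estimated stack usage in bytes
    )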
Functional Debug
----------------
Custom C++ operators support the use of the C++ language's ``printf()``. For functional debug, the recommended approach is using ``printf()`` to print input, intermediate, and final values. Consult the :ref:`custom-ops-api-ref-guide` for more information.
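As a minimal sketch, the negation kernel from earlier could print its element count and the first few values for functional debug (format-specifier support on the device may be limited, so values are cast conservatively here):
::
    #include <stdio.h>
    #include <torch/torch.h>

    torch::Tensor tensor_negate_compute(const torch::Tensor& t_in) {
        size_t num_elem = t_in.numel();
        torch::Tensor t_out = torch::zeros({num_elem}, torch::kFloat);
        auto t_in_acc = t_in.accessor<float, 1>();
        auto t_out_acc = t_out.accessor<float, 1>();
        // Functional debug: print the element count and the first few values.
        printf("tensor_negate_compute: num_elem=%d\n", (int)num_elem);
        for (size_t i = 0; i < num_elem; i++) {
            t_out_acc[i] = -1 * t_in_acc[i];
            if (i < 4) {
                printf("in[%d]=%f out[%d]=%f\n", (int)i, t_in_acc[i], (int)i, t_out_acc[i]);
            }
        }
        return t_out;
    }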
```
|
|
2023-09-29T20:54:59.419Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/neuron-customops/tutorials/tutorials.rst.txt
|
```
Tutorials
=========
.. toctree::
:maxdepth: 1
/neuron-customops/tutorials/customop-mlp-training
/neuron-customops/tutorials/customop-mlp-perf-opt
```
|
|
2023-09-29T20:54:59.455Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuron-sys-tools/neuron-ls.rst.txt
|
```
.. _neuron-ls-ug:
Neuron LS User Guide
---------------------
To identify the number of Neuron Devices on a given instance, use the
``neuron-ls`` command. ``neuron-ls`` will also show which processes
are using each Device, including the command used to launch each of
those processes.
.. rubric:: neuron-ls CLI
.. program:: neuron-ls
.. option:: neuron-ls [options]
**Available options:**
- :option:`--wide, -w`: Displays the table in a wider format.
- :option:`--show-all-procs, -a`: Show all processes using the Neuron Devices, including processes that aren't using
Neuron Runtime 2.x such as ``neuron-monitor`` or ``neuron-ls`` itself.
- :option:`--topology, -t`: Display topology information about the system's Neuron Devices.
.. note::
``neuron-ls`` fully supports the newly launched Inf2 instances.
Examples
^^^^^^^^
First we will show the output of ``neuron-ls`` on an Inf1.6xlarge instance.
::
$ neuron-ls
+--------+--------+--------+-----------+--------------+-------+------------------------------------------+---------+
| NEURON | NEURON | NEURON | CONNECTED | PCI | PID | COMMAND | RUNTIME |
| DEVICE | CORES | MEMORY | DEVICES | BDF | | | VERSION |
+--------+--------+--------+-----------+--------------+-------+------------------------------------------+---------+
| 0 | 4 | 8 GB | 1 | 0000:00:1c.0 | 23518 | neuron-app01 infer --input-data-direc... | 2.0.0 |
| | | | | | 23531 | neuron-app02 infer --input-data-direc... | 2.0.0 |
+--------+--------+--------+-----------+--------------+-------+------------------------------------------+---------+
| 1 | 4 | 8 GB | 2, 0 | 0000:00:1d.0 | 23595 | neuron-app01 infer --input-data-direc... | 2.0.0 |
+--------+--------+--------+-----------+--------------+-------+------------------------------------------+---------+
| 2 | 4 | 8 GB | 3, 1 | 0000:00:1e.0 | 23608 | neuron-app02 infer --input-data-direc... | 2.0.0 |
+--------+--------+--------+-----------+--------------+-------+------------------------------------------+---------+
| 3 | 4 | 8 GB | 2 | 0000:00:1f.0 | NA | NA | NA |
+--------+--------+--------+-----------+--------------+-------+------------------------------------------+---------+
$ neuron-ls --wide
+--------+--------+--------+-----------+--------------+-------+----------------------------------------------------------------------------------+---------+
| NEURON | NEURON | NEURON | CONNECTED | PCI | PID | COMMAND | RUNTIME |
| DEVICE | CORES | MEMORY | DEVICES | BDF | | | VERSION |
+--------+--------+--------+-----------+--------------+-------+----------------------------------------------------------------------------------+---------+
| 0 | 4 | 8 GB | 1 | 0000:00:1c.0 | 23518 | neuron-app01 infer --input-data-directory ~/my_input_data --inference-count 5... | 2.0.0 |
| | | | | | 23531 | neuron-app02 infer --input-data-directory ~/my_input_data --inference-count 5... | 2.0.0 |
+--------+--------+--------+-----------+--------------+-------+----------------------------------------------------------------------------------+---------+
| 1 | 4 | 8 GB | 2, 0 | 0000:00:1d.0 | 23595 | neuron-app01 infer --input-data-directory ~/my_input_data --inference-count 5... | 2.0.0 |
+--------+--------+--------+-----------+--------------+-------+----------------------------------------------------------------------------------+---------+
| 2 | 4 | 8 GB | 3, 1 | 0000:00:1e.0 | 23608 | neuron-app02 infer --input-data-directory ~/my_input_data --inference-count 5... | 2.0.0 |
+--------+--------+--------+-----------+--------------+-------+----------------------------------------------------------------------------------+---------+
| 3 | 4 | 8 GB | 2 | 0000:00:1f.0 | NA | NA | NA |
+--------+--------+--------+-----------+--------------+-------+----------------------------------------------------------------------------------+---------+
$ neuron-ls --show-all-procs
+--------+--------+--------+-----------+--------------+-------+------------------------------------------+---------+
| NEURON | NEURON | NEURON | CONNECTED | PCI | PID | COMMAND | RUNTIME |
| DEVICE | CORES | MEMORY | DEVICES | BDF | | | VERSION |
+--------+--------+--------+-----------+--------------+-------+------------------------------------------+---------+
| 0 | 4 | 8 GB | 1 | 0000:00:1c.0 | 23518 | neuron-app01 infer --input-data-direc... | 2.0.0 |
| | | | | | 23531 | neuron-app02 infer --input-data-direc... | 2.0.0 |
| | | | | | 23764 | neuron-monitor | NA |
| | | | | | 23829 | neuron-ls --show-all-procs | NA |
+--------+--------+--------+-----------+--------------+-------+------------------------------------------+---------+
| 1 | 4 | 8 GB | 2, 0 | 0000:00:1d.0 | 23595 | neuron-app01 infer --input-data-direc... | 2.0.0 |
| | | | | | 23764 | neuron-monitor | NA |
| | | | | | 23829 | neuron-ls --show-all-procs | NA |
+--------+--------+--------+-----------+--------------+-------+------------------------------------------+---------+
| 2 | 4 | 8 GB | 3, 1 | 0000:00:1e.0 | 23608 | neuron-app02 infer --input-data-direc... | 2.0.0 |
| | | | | | 23764 | neuron-monitor | NA |
| | | | | | 23829 | neuron-ls --show-all-procs | NA |
+--------+--------+--------+-----------+--------------+-------+------------------------------------------+---------+
| 3 | 4 | 8 GB | 2 | 0000:00:1f.0 | 23764 | neuron-monitor | NA |
| | | | | | 23829 | neuron-ls --show-all-procs | NA |
+--------+--------+--------+-----------+--------------+-------+------------------------------------------+---------+
$ neuron-ls --topology
+--------+--------+--------+-----------+---------+
| NEURON | NEURON | NEURON | CONNECTED | PCI |
| DEVICE | CORES | MEMORY | DEVICES | BDF |
+--------+--------+--------+-----------+---------+
| 0 | 4 | 8 GB | 1 | 00:1c.0 |
| 1 | 4 | 8 GB | 2, 0 | 00:1d.0 |
| 2 | 4 | 8 GB | 3, 1 | 00:1e.0 |
| 3 | 4 | 8 GB | 2 | 00:1f.0 |
+--------+--------+--------+-----------+---------+
Neuron Device Topology
[ 0 ]◄––►[ 1 ]◄––►[ 2 ]◄––►[ 3 ]
On Trn1 and Inf2 instances, ``neuron-ls`` works similarly. Below is an example displaying the topology for a Trn1.32xlarge instance.
::
$ neuron-ls --topology
+--------+--------+--------+---------------+---------+
| NEURON | NEURON | NEURON | CONNECTED | PCI |
| DEVICE | CORES | MEMORY | DEVICES | BDF |
+--------+--------+--------+---------------+---------+
| 0 | 2 | 32 GB | 12, 3, 4, 1 | 00:04.0 |
| 1 | 2 | 32 GB | 13, 0, 5, 2 | 00:05.0 |
| 2 | 2 | 32 GB | 14, 1, 6, 3 | 00:06.0 |
| 3 | 2 | 32 GB | 15, 2, 7, 0 | 00:07.0 |
| 4 | 2 | 32 GB | 0, 7, 8, 5 | 00:08.0 |
| 5 | 2 | 32 GB | 1, 4, 9, 6 | 00:09.0 |
| 6 | 2 | 32 GB | 2, 5, 10, 7 | 00:0a.0 |
| 7 | 2 | 32 GB | 3, 6, 11, 4 | 00:0b.0 |
| 8 | 2 | 32 GB | 4, 11, 12, 9 | 00:0c.0 |
| 9 | 2 | 32 GB | 5, 8, 13, 10 | 00:0d.0 |
| 10 | 2 | 32 GB | 6, 9, 14, 11 | 00:0e.0 |
| 11 | 2 | 32 GB | 7, 10, 15, 8 | 00:0f.0 |
| 12 | 2 | 32 GB | 8, 15, 0, 13 | 00:10.0 |
| 13 | 2 | 32 GB | 9, 12, 1, 14 | 00:11.0 |
| 14 | 2 | 32 GB | 10, 13, 2, 15 | 00:12.0 |
| 15 | 2 | 32 GB | 11, 14, 3, 12 | 00:13.0 |
+--------+--------+--------+---------------+---------+
Neuron Device Topology
* * * *
│ │ │ │
▼ ▼ ▼ ▼
*––►[ 0 ]◄––►[ 1 ]◄––►[ 2 ]◄––►[ 3 ]◄––*
▲ ▲ ▲ ▲
│ │ │ │
▼ ▼ ▼ ▼
*––►[ 4 ]◄––►[ 5 ]◄––►[ 6 ]◄––►[ 7 ]◄––*
▲ ▲ ▲ ▲
│ │ │ │
▼ ▼ ▼ ▼
*––►[ 8 ]◄––►[ 9 ]◄––►[10 ]◄––►[11 ]◄––*
▲ ▲ ▲ ▲
│ │ │ │
▼ ▼ ▼ ▼
*––►[12 ]◄––►[13 ]◄––►[14 ]◄––►[15 ]◄––*
▲ ▲ ▲ ▲
│ │ │ │
* * * *
- NEURON DEVICE: Logical ID assigned to the Neuron Device.
- NEURON CORES: Number of NeuronCores present in the Neuron Device.
- NEURON MEMORY: Amount of DRAM memory in the Neuron Device.
- CONNECTED DEVICES: Logical ID of Neuron Devices connected to this
Neuron Device.
- PCI BDF: PCI Bus Device Function (BDF) ID of the device.
- PID: ID of the process using this Neuron Device.
- COMMAND: Command used to launch the process using this
Neuron Device.
- RUNTIME VERSION: Version of Neuron Runtime (if applicable) for
the application using this Neuron Device.
```
|
|
2023-09-29T20:54:59.500Z
|
|
TensorFlow Neuron (tensorflow-neuron (TF1.x)) Supported operators [XLA] — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-xla.html#neuron-cc-ops-xla
|
# TensorFlow Neuron (tensorflow-neuron (TF1.x)) Supported operators \[XLA\] — AWS Neuron Documentation
_This document is relevant for_: `Inf1`
## TensorFlow Neuron (`tensorflow-neuron (TF1.x)`) Supported operators \[XLA\][#](#tensorflow-neuron-tensorflow-neuron-tf1-x-supported-operators-xla "Permalink to this headline")
To see a list of supported operators for XLA, run the following command:
`neuron-cc list-operators --framework XLA`
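For example, to check whether a specific operator such as `DotGeneral` appears in the list, the output can be filtered with standard shell tools (a quick sketch, not part of the original page):
```
neuron-cc list-operators --framework XLA | grep -i dotgeneral
```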
| Supported XLA Operators | Notes |
| --- | --- |
| Abs | |
| Add | |
| Allgather | |
| Allreduce | |
| Atan2 | |
| Batchnorm | |
| Batchnormgrad | |
| Batchnorminference | |
| Broadcast | |
| BroadcastInDim | |
| Ceil | |
| Clamp | |
| Compare | |
| Concatenate | |
| Constant | |
| ConstantLiteral | |
| ConvertElementType | |
| Cos | |
| Customcall | |
| Div | |
| Dot | |
| DotGeneral | |
| DynamicUpdateSlice | Supports only for constant index |
| Eq | |
| Exp | |
| Floor | |
| Gather | Supports only disjoint start\_index\_map and remapped\_offset\_dims |
| Ge | |
| GetTupleElement | |
| Gt | |
| Iota | |
| Le | |
| Log | |
| LogicalAnd | |
| LogicalNot | |
| Lt | |
| Max | |
| Min | |
| Mul | |
| Ne | |
| Neg | |
| Pad | |
| Pow | Exponent argument must be a compile-time integer constant |
| Reduce | Min, Max, Add and Mul are the only supported computations. Init\_values must be constant |
| Reshape | |
| RngBitGenerator | Ignores user seed |
| RngUniform | |
| Rsqrt | |
| Scatter | |
| Select | |
| ShiftRightLogical | |
| Sign | |
| Sin | |
| Slice | |
| Sqrt | |
| Sub | |
| Tanh | |
| Transpose | |
| Tuple | |
_This document is relevant for_: `Inf1`
|
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/api-reference-guide/torch-neuronx-profiling-api.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) Profiling API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/index.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-18" name="toctree-checkbox-18" type="checkbox">
<label for="toctree-checkbox-18">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-programming-guide.html">
Developer Guide for Training with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/training/pytorch-neuron-debug.html">
How to debug models in PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html">
Developer Guide for Profiling with PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/misc-training.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-19" name="toctree-checkbox-19" type="checkbox">
<label for="toctree-checkbox-19">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/pytorch-neuron-supported-operators.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../tools/index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tutorials/tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../tools/neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../tools/neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../../release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"></div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Frelease-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-xla.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-xla.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../../../_sources/release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-xla.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.rst</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
</div>
</div>
<div class="article row">
<div class="col pl-md-3 pl-lg-5 content-container">
<!-- Table of contents that is only displayed when printing the page -->
<div id="jb-print-docs-body" class="onlyprint">
<h1>TensorFlow Neuron (tensorflow-neuron (TF1.x)) Supported operators [XLA]</h1>
<!-- Table of contents -->
<div id="print-main-content">
<div id="jb-print-toc">
</div>
</div>
</div>
<main id="main-content" role="main">
<div>
*This document is relevant for*: `Inf1`

# TensorFlow Neuron (`tensorflow-neuron (TF1.x)`) Supported operators [XLA]

To see a list of supported operators for XLA, run the following command:

`neuron-cc list-operators --framework XLA`
<table class="table">
<colgroup>
<col style="width: 37%">
<col style="width: 63%">
</colgroup>
<thead>
<tr class="row-odd"><th class="head"><p>Supported XLA Operators</p></th>
<th class="head"><p>Notes</p></th>
</tr>
</thead>
<tbody>
<tr class="row-even"><td><p>Abs</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Add</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Allgather</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Allreduce</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Atan2</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Batchnorm</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Batchnormgrad</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Batchnorminference</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Broadcast</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>BroadcastInDim</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Ceil</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Clamp</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Compare</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Concatenate</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Constant</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>ConstantLiteral</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>ConvertElementType</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Cos</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Customcall</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Div</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Dot</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>DotGeneral</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>DynamicUpdateSlice</p></td>
<td><p>Supports only for constant index</p></td>
</tr>
<tr class="row-odd"><td><p>Eq</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Exp</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Floor</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Gather</p></td>
<td><p>Supports only disjoint start_index_map
and remapped_offset_dims</p></td>
</tr>
<tr class="row-odd"><td><p>Ge</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>GetTupleElement</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Gt</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Iota</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Le</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Log</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>LogicalAnd</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>LogicalNot</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Lt</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Max</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Min</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Mul</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Ne</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Neg</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Pad</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Pow</p></td>
<td><p>Exponent argument must be a compile-time
integer constant</p></td>
</tr>
<tr class="row-odd"><td><p>Reduce</p></td>
<td><p>Min, Max, Add and Mul are the only
supported computations. Init_values must
be constant</p></td>
</tr>
<tr class="row-even"><td><p>Reshape</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>RngBitGenerator</p></td>
<td><p>Ignores user seed</p></td>
</tr>
<tr class="row-even"><td><p>RngUniform</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Rsqrt</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Scatter</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Select</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>ShiftRightLogical</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Sign</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Sin</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Slice</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Sqrt</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Sub</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Tanh</p></td>
<td></td>
</tr>
<tr class="row-odd"><td><p>Transpose</p></td>
<td></td>
</tr>
<tr class="row-even"><td><p>Tuple</p></td>
<td></td>
</tr>
</tbody>
</table>
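If you want to check a model's operators against this list programmatically, the following is a minimal, hypothetical sketch (not part of the Neuron tooling): it shells out to the `neuron-cc list-operators --framework XLA` command shown above and assumes the command prints one operator name per line; the operator names checked at the bottom are placeholders.

```python
import subprocess

def supported_xla_ops():
    # Run the documented CLI command and collect its output,
    # assuming one operator name per line.
    out = subprocess.run(
        ["neuron-cc", "list-operators", "--framework", "XLA"],
        check=True, capture_output=True, text=True,
    ).stdout
    return {line.strip() for line in out.splitlines() if line.strip()}

if __name__ == "__main__":
    supported = supported_xla_ops()
    for op in ["DotGeneral", "Gather", "RngBitGenerator"]:  # placeholder names
        print(f"{op}: {'listed' if op in supported else 'not listed'}")
```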
*This document is relevant for*: `Inf1`
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html>
|
2023-09-29T20:54:59.584Z
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/customcxxps/gpsimd-customop-lib.rst.txt
|
```
.. _gpsimd-customop-lib-rn:

Neuron Custom C++ Library Release Notes
========================================

aws-neuronx-gpsimd-customop-lib [0.3]
-------------------------------------

Date: 04/28/2023

* Add initial support for using Multiple GPSIMD Cores for Custom C++ Operators
* Package name was changed to ``aws-neuronx-gpsimd-customop-lib``

aws-neuronx-gpsimd-customop [0.1]
---------------------------------

Date: 02/08/2023

* First release of aws-neuronx-gpsimd-customop. This release provides tensor library support required for building Neuron Custom C++ operators.
```
|
|
2023-09-29T20:54:59.670Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuron-sys-tools/neuron-monitor-user-guide.rst.txt
|
```
.. _neuron-monitor-ug:
Neuron Monitor User Guide
=========================
.. contents:: Table of contents
:local:
:depth: 2
Overview
--------
**neuron-monitor** collects metrics and stats from the Neuron
Applications running on the system and streams the collected data to
``stdout`` in ``JSON`` format. It is provided as part of the
``aws-neuron-tools`` package.
These metrics and stats are organized into **metric groups** which can
be configured by providing a configuration file as described in :ref:`using-neuron-monitor`
When running, **neuron-monitor** will:
- Collect the data for the metric groups which, based on the elapsed
time since their last update, need to be updated
- Take the newly collected data and consolidate it into a large report
- Serialize that report to JSON and stream it to stdout from where it
can be consumed by other tools - such as the sample
`neuron-monitor-cloudwatch.py <#neuron-monitor-cloudwatchpy>`__ and
`neuron-monitor-prometheus.py <#neuron-monitor-prometheuspy>`__
scripts.
- Wait until at least one **metric group** needs to be collected and
repeat this flow
.. note::
``neuron-monitor`` fully supports the newly launched inf2 instances.
.. _using-neuron-monitor:
Using neuron-monitor
--------------------
.. _monitor_cli:
.. rubric:: neuron-monitor CLI
.. program:: neuron-monitor
.. option:: neuron-monitor [parameters]
neuron-monitor accepts the following optional parameters:
- :option:`--verbose` (int) default=0: Can be 0 to 4, and controls the amount of
debugging and verbose information sent to stderr; **0: no output**,
**4: maximum verbosity**
- :option:`-c, --config-file` (string): Allows specifying a valid path to a
neuron-monitor JSON configuration file
**Example:**
.. code-block::
neuron-monitor -c monitor.conf
Not specifying any configuration file will enable collecting all the metric groups
with a period of 5 seconds for all currently running Neuron applications.
Configuration file example
~~~~~~~~~~~~~~~~~~~~~~~~~~
Example of a configuration file which enables all available **metric
groups** for every running Neuron application, with a global update period of 1
second and sets an update period of 2 seconds for the ``"neuron_hw_counters"``
metric group:
::
{
"period": "1s",
"neuron_runtimes": [
{
"tag_filter": ".*",
"metrics": [
{
"type": "neuroncore_counters"
},
{
"type": "memory_used"
},
{
"type": "neuron_runtime_vcpu_usage"
},
{
"type": "execution_stats"
}
]
}
],
"system_metrics": [
{
"type": "vcpu_usage"
},
{
"type": "memory_info"
},
{
"period": "2s",
"type": "neuron_hw_counters"
}
]
}
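As an illustration only (not part of the official tooling), the following Python sketch
writes the configuration above to ``monitor.conf``, launches ``neuron-monitor`` with it and
consumes the resulting reports. It assumes ``neuron-monitor`` is on the ``PATH`` and that
each report is emitted as a single line of JSON; adjust the parsing if your setup differs.

.. code-block:: python

   import json
   import subprocess

   config = {
       "period": "1s",
       "neuron_runtimes": [
           {"tag_filter": ".*",
            "metrics": [{"type": "neuroncore_counters"},
                        {"type": "memory_used"},
                        {"type": "neuron_runtime_vcpu_usage"},
                        {"type": "execution_stats"}]}
       ],
       "system_metrics": [
           {"type": "vcpu_usage"},
           {"type": "memory_info"},
           {"period": "2s", "type": "neuron_hw_counters"},
       ],
   }

   with open("monitor.conf", "w") as f:
       json.dump(config, f)

   # Stream reports; each stdout line is expected to hold one complete JSON report.
   proc = subprocess.Popen(["neuron-monitor", "-c", "monitor.conf"],
                           stdout=subprocess.PIPE, text=True)
   for line in proc.stdout:
       report = json.loads(line)
       print("monitored Neuron applications:", len(report.get("neuron_runtime_data", [])))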
Neuron applications tagging
~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to make application monitoring easier, Neuron applications can be tagged with a 255 character
string which identifies that app. Tagging is done using the ``NEURON_PROCESS_TAG`` environment variable.
For example:
``NEURON_PROCESS_TAG=my_app_1 python training.py`` will associate the ``my_app_1`` tag with that Python application.
If ``NEURON_PROCESS_TAG`` is not specified, the application's PID will be used as a TAG.
This tag will be used by neuron-monitor to filter Neuron applications.
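As a small sketch (``training.py`` is a placeholder for your own workload), a launcher
script can also set the tag programmatically before starting the application:

.. code-block:: python

   import os
   import subprocess

   # neuron-monitor configured with "tag_filter": "my_app_.*" would then match this process.
   env = dict(os.environ, NEURON_PROCESS_TAG="my_app_1")
   subprocess.run(["python", "training.py"], env=env, check=True)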
JSON objects and fields in the configuration file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- ``"neuron_runtimes"`` - array of objects specifying which Neuron
Applications to monitor and what metric groups are enabled for each
of them
- ``"tag_filter"`` - a regex which will be used to filter Neuron applications tags
in order to determine if they will be monitored (optional)
- ``"metrics"`` - array of objects specifying which metric groups to
capture for this Neuron application
- ``"type"`` - type of metric group
- ``"period"`` - this field applies to **metric group** objects and
sets the amount of time between two updates for that metric group
- it can be specified as part of the **root** and/or
**neuron_runtime** objects where it applies to all their children,
and/or as part of a **metric group** object
- if there's no period specified, a default value of **5 seconds**
will be used
- ``"system_metrics"`` - array of objects specifying which system level
metric groups are enabled
Neuron Runtime-level metric groups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :ref:`neuron-monitor-nc-counters` - NeuronCore related metrics
- :ref:`neuron-monitor-memory-used` - data on the amount of memory used
by the Neuron application
- :ref:`neuron-monitor-vcpu-usage` - Neuron application vCPU
utilization data
- :ref:`neuron-monitor-execution-stats` - Neuron application execution
stats, including error count and latency
System-wide metric groups
~~~~~~~~~~~~~~~~~~~~~~~~~
- :ref:`neuron-monitor-vcpu-usage` - system-wide vCPU usage
- :ref:`neuron-monitor-memory-info` - system-wide memory usage
- :ref:`neuron-monitor-hw-counters` - counters for correctable and
uncorrectable memory ecc events
Execution model
---------------
|image|
neuron-monitor waits for one or more **metric groups** to be up for
update, then collects the corresponding data, consolidates it into a
report which is streamed to stdout as a JSON and goes back to waiting.
The JSON output format
----------------------
Whenever the report gets updated, a complete JSON is written to stdout.
This is its structure:
::
{
"neuron_runtime_data": [
{
"pid": 0,
"address": "",
"neuron_runtime_tag", "my_app_1",
"error": "",
"report": {
"neuroncore_counters": {
[...]
},
"execution_stats": {
[...]
},
"memory_used": {
[...]
},
"neuron_runtime_vcpu_usage": {
[...]
}
}
}
],
"system_data": {
"neuron_hw_counters": {
[...]
},
"vcpu_usage": {
[...]
},
"memory_info": {
[...]
}
},
"instance_info": {
[...]
},
"neuron_hardware_info": {
[...]
}
}
- ``"neuron_runtime_data"`` is an array containing one entry per each
Neuron application which passes the filter specified in the settings file
- ``"pid"`` is the pid of this Neuron application
- ``"neuron_runtime_tag"`` is the configured tag for the Neuron application
- ``"error"`` specifies any error that occurred when collecting data
from this Neuron application
- ``"report"`` will contain the results for the Neuron application-level
metric groups; their formats are described below
- ``"system_data"`` has a similar structure to ``"neuron_runtime_data"``‘s
``"report"`` but only contains system-level metric groups (not
associated to any Neuron application)
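A minimal sketch of how a consumer might walk the structure described above, assuming
``report`` already holds one parsed output document (for example, one line of
``neuron-monitor`` output loaded with ``json.loads``):

.. code-block:: python

   def summarize(report: dict) -> None:
       # One entry per monitored Neuron application.
       for runtime in report.get("neuron_runtime_data", []):
           tag = runtime.get("neuron_runtime_tag")
           pid = runtime.get("pid")
           err = runtime.get("error", "")
           groups = sorted(runtime.get("report", {}).keys())
           print(f"app tag={tag} pid={pid} error={err!r} metric groups={groups}")
       # System-wide metric groups are not tied to any application.
       print("system metric groups:", sorted(report.get("system_data", {}).keys()))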
Regardless of the configuration, the following two JSON objects are always present
in the output:
**instance_info**
Contains information about the instance on which neuron-monitor is running.
::
"instance_info": {
"instance_name": "My_Instance",
"instance_id": "i-0011223344556677a",
"instance_type": "inf1.xlarge",
"instance_availability_zone": "us-west-2b",
"instance_availability_zone_id": "usw2-az2",
"instance_region": "us-west-2",
"ami_id": "ami-0011223344556677b",
"subnet_id": "subnet-112233ee",
"error": ""
}
Depending on when the instance was launched, the following fields might
not be available:
- ``instance_availability_zone_id`` : available only for instances
launched on 2020-08-24 and later
- ``instance_region`` : available only for instances launched on
2020-08-24 and later
- ``instance_name`` : available only if ``instance_region`` is set and
aws-cli tools are installed
``error`` will contain an error string if getting one of the fields,
**except those mentioned above**, resulted in an error.
**neuron_hardware_info**
Contains basic information about the Neuron hardware.
::
"neuron_hardware_info": {
"neuron_device_version": "v2",
"neuroncore_version": "v2",
"neuron_device_count": 16,
"neuroncore_per_device_count": 4,
"error": ""
}
- ``neuron_device_version``: version of the Neuron Devices on the instance,
- ``neuroncore_version``: version of the NeuronCores on the instance,
- ``neuron_device_count`` : number of available Neuron Devices
- ``neuroncore_per_device_count`` : number of NeuronCores present on each Neuron Device
- ``error`` : will contain an error string if any occurred when getting this information
(usually due to the Neuron Driver not being installed or not running).
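For example, a hypothetical helper (assuming ``report`` is one parsed output document)
could condense these two always-present objects into a one-line summary:

.. code-block:: python

   def hardware_summary(report: dict) -> str:
       info = report.get("instance_info", {})
       hw = report.get("neuron_hardware_info", {})
       devices = hw.get("neuron_device_count", 0)
       cores = hw.get("neuroncore_per_device_count", 0)
       return (f"{info.get('instance_id')} ({info.get('instance_type')}): "
               f"{devices} Neuron device(s) x {cores} NeuronCore(s) = {devices * cores} cores")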
Each **metric group** requested in the settings file will get an entry
in the resulting output. The general format for such an entry is:
::
"metric_group": {
"period": 1.015, // Actual captured period, in seconds
"error": "", // Error, if any occurred, otherwise an empty string
[...] // Metric group specific data
}
.. _runtime-level-metric-groups-1:
Neuron application level metric groups
--------------------------------------
.. _neuron-monitor-nc-counters:
neuroncore_counters
~~~~~~~~~~~~~~~~~~~~~
::
"neuroncore_counters": {
"period": 1.000113182,
"neuroncores_in_use": {
"0": {
"neuroncore_utilization": 42.01,
"flops": 1234567891011
},
"1": {
"neuroncore_utilization": 42.02,
"flops": 1234567891021
},
"2": {
"neuroncore_utilization": 42.03,
"flops": 1234567891031
},
"3": {
"neuroncore_utilization": 42.04,
"flops": 1234567891041
}
},
"error": ""
}
- ``"neuroncores_in_use"`` is an object containing data for all the
NeuronCores that were active when the data was captured, indexed by
NeuronCore index: ``"neuroncore_index": { neuroncore_data }``
- ``"neuroncore_utilization"`` - NeuronCore utilization, in percent,
during the captured period
- ``"flops"`` - number of floating point operations per second during
the captured period
- ``"error"`` - string containing any error that occurred when
collecting the data
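A small sketch, assuming ``neuroncore_counters`` is the parsed metric group shown above:

.. code-block:: python

   def average_neuroncore_utilization(neuroncore_counters: dict) -> float:
       """Average utilization (percent) across the NeuronCores that were active."""
       cores = neuroncore_counters.get("neuroncores_in_use", {})
       if not cores:
           return 0.0
       return sum(c["neuroncore_utilization"] for c in cores.values()) / len(cores)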
.. _neuron-monitor-execution-stats:
execution_stats
~~~~~~~~~~~~~~~
::
"execution_stats": {
"period": 1.030613214,
"error_summary": {
"generic": 0,
"numerical": 0,
"transient": 0,
"model": 0,
"runtime": 0,
"hardware": 0
},
"execution_summary": {
"completed": 123,
"completed_with_err": 0,
"completed_with_num_err": 0,
"timed_out": 0,
"incorrect_input": 0,
"failed_to_queue": 0
},
"latency_stats": {
"total_latency": {
"p0": 0.01100001,
"p1": 0.01100002,
"p25": 0.01100004,
"p50": 0.01100008,
"p75": 0.01100010,
"p99": 0.01100012,
"p100": 0.01100013
},
"device_latency": {
"p0": 0.01000001,
"p1": 0.01000002,
"p25": 0.01000004,
"p50": 0.01000008,
"p75": 0.01000010,
"p99": 0.01000012,
"p100": 0.01000013
}
},
"error": ""
},
- ``"error_summary"`` is an object containing the error counts for the
captured period indexed by their type
- ``"generic"`` - generic execution errors
- ``"numeric"`` - NAN errors encountered during execution
- ``"transient"`` - recoverable errors, such as ECC corrections
- ``"model"`` - model-related errors
- ``"runtime"`` - Neuron Runtime errors
- ``"hardware"`` - hardware errors such as uncorrectable ECC issues
- ``"execution_summary"`` is an object containing all execution outcome
counts for the captured period indexed by their type
- ``"completed"`` - executions completed successfully
- ``"completed_with_err"`` - executions that ended in an error other
than a numeric error
- ``"completed_with_num_err"`` - executions that ended in a numeric
error
- ``"timed_out"`` - executions that took longer than the Neuron
Runtime configured timeout value
- ``"incorrect_input"`` - executions that failed to start due to
incorrect input being provided
- ``"failed_to_queue"`` - execution requests that were rejected due
to Neuron Runtime not being able to queue them
- ``"latency_stats"`` contains two objects containing latency
percentiles, in seconds, for the data captured for the model
executed during the captured period. If there are no models being
executed during this time, the two objects will be ``null`` (i.e.
``"total_latency": null``)
- ``"total_latency"`` - percentiles, in seconds, representing
latency for an execution as measured by the Neuron Runtime
- ``"device_latency"`` - percentiles, in seconds, representing execution time
exclusively on the Neuron Device
- ``"error"`` - string containing any error that occurred when
collecting the data
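A small sketch, assuming ``execution_stats`` is the parsed metric group shown above:

.. code-block:: python

   def execution_health(execution_stats: dict) -> dict:
       """Condense the execution_stats metric group into a few key numbers."""
       errors = sum(execution_stats.get("error_summary", {}).values())
       completed = execution_stats.get("execution_summary", {}).get("completed", 0)
       # total_latency is null when no model executed during the period.
       latency = execution_stats.get("latency_stats", {}).get("total_latency") or {}
       return {"completed": completed,
               "errors": errors,
               "p50_latency_s": latency.get("p50"),
               "p99_latency_s": latency.get("p99")}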
.. _neuron-monitor-memory-used:
memory_used
~~~~~~~~~~~
::
"memory_used": {
"period": 1.00001,
"neuron_runtime_used_bytes": {
"host": 6997643264,
"neuron_device": 12519788544,
"usage_breakdown": {
"host": {
"application_memory": 6996594688,
"constants": 0,
"dma_buffers": 1048576,
"tensors": 0
},
"neuroncore_memory_usage": {
"0": {
"constants": 193986816,
"model_code": 176285056,
"model_shared_scratchpad": 0,
"runtime_memory": 0,
"tensors": 20971520
},
"1": {
"constants": 193986816,
"model_code": 176285056,
"model_shared_scratchpad": 0,
"runtime_memory": 0,
"tensors": 20971520
},
...
}
}
},
"loaded_models": [
{
"name": "neff",
"uuid": "91f2f66e83ea419dace1da07617ad39f",
"model_id": 10005,
"is_running": false,
"subgraphs": {
"sg_00": {
"memory_used_bytes": {
"host": 20480,
"neuron_device": 21001024,
"usage_breakdown": {
"host": {
"application_memory": 20480,
"constants": 0,
"dma_buffers": 0,
"tensors": 0
},
"neuron_device": {
"constants": 20971520,
"model_code": 29504,
"runtime_memory": 0,
"tensors": 0
}
}
},
"neuroncore_index": 0,
"neuron_device_index": 12
}
}
},
...
],
"error": ""
}
- ``"memory_used"`` summarizes the amount of memory used by the
Neuron application
- ``"neuron_runtime_used_bytes"`` - current amount of memory used by
the Neuron application
- ``"host"`` - total host DRAM usage in bytes
- ``"neuron_device"`` - total Neuron device memory usage in bytes
- ``"usage_breakdown"`` - a breakdown of the total memory usage in the other two fields
- ``"host"`` - breakdown of the host memory usage
- ``"application_memory"`` - amount of host memory used by the application - this includes all allocations that are not included
in the next categories
- ``"constants"`` - amount of host memory used for constants during training (or weights during inference)
- ``"dma_buffers"`` - amount of host memory used for DMA transfers
- ``"tensors"`` - amount of host memory used for tensors
- ``"neuroncore_memory_usage"`` - a breakdown of memory allocated on the Neuron Devices and the NeuronCores for which it was allocated
- ``"0"`` - ``"32"`` (for trn1-32xlarge) - NeuronCores for which the memory was allocated
- ``"constants"`` - amount of device memory used for constants during training (or weights during inference)
- ``"model_code"`` - amount of device memory used for models' executable code
- ``"model_shared_scratchpad"`` - amount of device memory used for the scratchpad shared by the models - a memory region reserved for the models'
internal variables and auxiliary buffers
- ``"runtime_memory"`` - amount of device memory used by the Neuron Runtime
- ``"tensors"`` - amount of device memory used for tensors
- ``"loaded_models"`` - array containing objects representing loaded models
- ``"name"`` - name of the model
- ``"uuid"`` - unique id for the model
- ``"model_id"`` - Neuron application-assigned ID for this model
- ``"is_running"`` - true if this model is currently started, false otherwise
- "``subgraphs"`` - object containing all the subgraphs for the model, indexed by their name: ``"subgraph_name": { subgraph_data }``
- ``"memory_used_bytes"`` - memory usage for this subgraph
- ``"host"`` - total host DRAM usage in bytes
- ``"neuron_device"`` - total Neuron device DRAM usage in bytes
- ``"usage_breakdown"`` - a breakdown of memory allocated at load time for this model
- ``"host"`` - breakdown of host memory allocated for this model
- ``"application_memory"`` - amount of host memory allocated for this model by the Neuron Runtime which doesn't fall in any
of the next categories
- ``"constants"`` - amount of host memory used for constants during training (or weights during inference)
- ``"dma_buffers"`` - host memory allocated for DMA transfers for this model
- ``"tensors"`` - amount of device memory used for tensors at model load time
- ``"neuron_device"`` - a breakdown of device memory allocated for this model
- ``"constants"`` - amount of device memory used for constants during training (or weights during inference)
- ``"model_code"`` - amount of device memory used for the model's executable code
- ``"runtime_memory"`` - amount of device memory used by the Neuron Runtime for this model
- ``"tensors"`` - amount of device memory allocated for tensors at this model's load time
- ``"neuroncore_index"`` - NeuronCore index on which the subgraph is loaded
- ``"neuron_device_index"`` - Neuron device index on which the subgraph is loaded
- ``"error"`` - string containing any error that occurred when
collecting the data
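A small sketch, assuming ``memory_used`` is the parsed metric group shown above:

.. code-block:: python

   def device_memory_by_neuroncore(memory_used: dict) -> dict:
       """Total device memory, in bytes, attributed to each NeuronCore."""
       per_core = (memory_used.get("neuron_runtime_used_bytes", {})
                              .get("usage_breakdown", {})
                              .get("neuroncore_memory_usage", {}))
       return {core: sum(categories.values()) for core, categories in per_core.items()}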
neuron_runtime_vcpu_usage
~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
"neuron_runtime_vcpu_usage": {
"period": 1.030604818,
"vcpu_usage": {
"user": 42.01,
"system": 12.34
},
"error": ""
}
- ``"vcpu_usage"`` - object showing vCPU usage in percentages for the
Neuron application during the captured period
- ``"user"`` - percentage of time spent in user code by this Neuron
Application
- ``"system"`` - percentage of time spent in kernel code by this
Neuron application
- ``"error"`` - string containing any error that occurred when
collecting the data
System level metric groups
--------------------------
.. _neuron-monitor-hw-counters:
neuron_hw_counters
~~~~~~~~~~~~~~~~~~
::
"neuron_hw_counters": {
"period": 1.030359284,
"neuron_devices": [
{
"neuron_device_index": 0,
"mem_ecc_corrected": 0,
"mem_ecc_uncorrected": 0,
"sram_ecc_uncorrected": 0,
"sram_ecc_corrected": 0
}
],
"error": ""
},
- ``"neuron_devices"`` - array containing ECC data for all Neuron devices
- ``"neuron_device_index"`` - Neuron device index
- ``"mem_ecc_corrected"`` - number of corrected ECC events in the
Neuron device’s DRAM
- ``"mem_ecc_uncorrected"`` - number of uncorrected ECC events in
the Neuron device’s DRAM
- ``"sram_ecc_uncorrected"`` - number of uncorrected ECC events in
the Neuron device’s SRAM
- ``"sram_ecc_corrected"`` - number of corrected ECC events in
the Neuron device’s SRAM
- ``"error"`` - string containing any error that occurred when
collecting the data
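A small sketch, assuming ``neuron_hw_counters`` is the parsed metric group shown above:

.. code-block:: python

   def uncorrected_ecc_events(neuron_hw_counters: dict) -> int:
       """Total uncorrected ECC events (DRAM + SRAM) across all Neuron devices."""
       return sum(dev.get("mem_ecc_uncorrected", 0) + dev.get("sram_ecc_uncorrected", 0)
                  for dev in neuron_hw_counters.get("neuron_devices", []))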
.. _neuron-monitor-vcpu-usage:
vcpu_usage
~~~~~~~~~~~~
::
"vcpu_usage": {
"period": 0.999974868,
"average_usage": {
"user": 32.77,
"nice": 0,
"system": 22.87,
"idle": 39.36,
"io_wait": 0,
"irq": 0,
"soft_irq": 0
},
"usage_data": {
"0": {
"user": 34.41,
"nice": 0,
"system": 27.96,
"idle": 37.63,
"io_wait": 0,
"irq": 0,
"soft_irq": 0
},
"1": {
"user": 56.84,
"nice": 0,
"system": 28.42,
"idle": 14.74,
"io_wait": 0,
"irq": 0,
"soft_irq": 0
},
[...]
},
"context_switch_count": 123456,
"error": ""
}
- each vCPU usage object contains the following fields:
- ``"user"`` - percentage of time spent in user code
- ``"nice"`` - percentage of time spent executing niced user code
- ``"system"`` - percentage of time spent executing kernel code
- ``"idle"`` - percentage of time spent idle
- ``"io_wait"`` - percentage of time spent waiting for IO operations
- ``"irq"`` - percentage of time spent servicing hardware interrupts
- ``"soft_irq"`` - percentage of time spent servicing software
interrupts
- ``"average_usage"`` - contains the average usage across all vCPUs
during the captured period
- ``"usage_data"`` - contains per vCPU usage during the captured period
- ``"context_switch_count"`` - contains the number of vCPU context
switches during the captured period
- ``"error"`` - string containing any error that occurred when
collecting the data
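A small sketch, assuming ``vcpu_usage`` is the parsed metric group shown above:

.. code-block:: python

   def busiest_vcpus(vcpu_usage: dict, top: int = 3) -> list:
       """Return (vCPU id, non-idle percentage) for the busiest vCPUs."""
       per_cpu = vcpu_usage.get("usage_data", {})
       busy = {cpu: 100.0 - data.get("idle", 0.0) for cpu, data in per_cpu.items()}
       return sorted(busy.items(), key=lambda kv: kv[1], reverse=True)[:top]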
.. _neuron-monitor-memory-info:
memory_info
~~~~~~~~~~~
::
"memory_info": {
"period": 5.346411129,
"memory_total_bytes": 49345835008,
"memory_used_bytes": 16042344448,
"swap_total_bytes": 0,
"swap_used_bytes": 0,
"error": ""
}
- ``"memory_total_bytes"`` - total size of the host memory, in bytes
- ``"memory_used_bytes"`` - amount of host memory in use, in bytes
- ``"swap_total_bytes"`` - total size of the host swap file, in bytes
- ``"swap_used_bytes"`` - amount of swap memory in use, in bytes
.. _neuron-monitor-companion-scripts:
Companion scripts
-----------------
neuron-monitor is installed with two example Python companion scripts:
`neuron-monitor-cloudwatch.py <#neuron-monitor-cloudwatchpy>`__ and
`neuron-monitor-prometheus.py <#neuron-monitor-prometheuspy>`__.
.. _neuron-monitor-cloudwatchpy:
neuron-monitor-cloudwatch.py
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It requires Python3 and the `boto3 Python
module <https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#quickstart>`__.
It is installed to:
``/opt/aws/neuron/bin/neuron-monitor-cloudwatch.py``.
.. _using-neuron-monitor-cloudwatchpy:
Using neuron-monitor-cloudwatch.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
neuron-monitor | neuron-monitor-cloudwatch.py --namespace <namespace> --region <region>
For example:
::
neuron-monitor | neuron-monitor-cloudwatch.py --namespace neuron_monitor_test --region us-west-2
.. _neuron-monitor-prometheuspy:
neuron-monitor-prometheus.py
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It requires Python3 and the `Prometheus client Python
module <https://github.com/prometheus/client_python>`__. It is installed
to: ``/opt/aws/neuron/bin/neuron-monitor-prometheus.py``.
.. _using-neuron-monitor-prometheuspy:
Using neuron-monitor-prometheus.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
neuron-monitor | neuron-monitor-prometheus.py --port <port>
For example:
::
neuron-monitor | neuron-monitor-prometheus.py --port 8008
The default value for ``--port`` is ``8000``.
If your data visualization framework is Grafana, we provide a :download:`Grafana dashboard </src/examples/neuron-monitor/neuron-monitor-grafana.json>`
which integrates with Prometheus and this script.
.. |image| image:: ../../images/nm-img2.png
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _neuron-monitor-ug:
Neuron Monitor User Guide
=========================
.. contents:: Table of contents
:local:
:depth: 2
Overview
--------
**neuron-monitor** collects metrics and stats from the Neuron
Applications running on the system and streams the collected data to
``stdout`` in ``JSON`` format. It is provided as part of the
``aws-neuron-tools`` package.
These metrics and stats are organized into **metric groups** which can
be configured by providing a configuration file as described in :ref:`using-neuron-monitor`
When running, **neuron-monitor** will:
- Collect the data for the metric groups which, based on the elapsed
time since their last update, need to be updated
- Take the newly collected data and consolidate it into a large report
- Serialize that report to JSON and stream it to stdout from where it
can be consumed by other tools - such as the sample
`neuron-monitor-cloudwatch.py <#neuron-monitor-cloudwatchpy>`__ and
`neuron-monitor-prometheus.py <#neuron-monitor-prometheuspy>`__
scripts.
- Wait until at least one **metric group** needs to be collected and
repeat this flow
.. note::
``neuron-monitor`` fully supports the newly launched inf2 instances.
.. _using-neuron-monitor:
Using neuron-monitor
--------------------
.. _monitor_cli:
.. rubric:: neuron-monitor CLI
.. program:: neuron-monitor
.. option:: neuron-monitor [parameters]
neuron-monitor accepts the following optional parameters:
- :option:`--verbose` (int) default=0: Can be 0 to 4, and controls the amount of
debugging and verbose information sent to stderr; **0: no output**,
**4: maximum verbosity**
- :option:`-c, --config-file` (string): Allows specifying a valid path to a
neuron-monitor JSON configuration file
**Example:**
.. code-block::
neuron-monitor -c monitor.conf
Not specifying any configuration file will enable collecting all the metric groups
with a period of 5 seconds for all currently running Neuron applications.
Configuration file example
~~~~~~~~~~~~~~~~~~~~~~~~~~
Example of a configuration file which enables all available **metric
groups** for every running Neuron application, with a global update period of 1
second and sets an update period of 2 seconds for the ``"neuron_hw_counters"``
metric group:
::
{
"period": "1s",
"neuron_runtimes": [
{
"tag_filter": ".*",
"metrics": [
{
"type": "neuroncore_counters"
},
{
"type": "memory_used"
},
{
"type": "neuron_runtime_vcpu_usage"
},
{
"type": "execution_stats"
}
]
}
],
"system_metrics": [
{
"type": "vcpu_usage"
},
{
"type": "memory_info"
},
{
"period": "2s",
"type": "neuron_hw_counters"
}
]
}
Neuron applications tagging
~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to make application monitoring easier, Neuron applications can be tagged with a 255 character
string which identifies that app. Tagging is done using the ``NEURON_PROCESS_TAG`` environment variable.
For example:
``NEURON_PROCESS_TAG=my_app_1 python training.py`` will associate the ``my_app_1`` tag with that Python application.
If ``NEURON_PROCESS_TAG`` is not specified, the application's PID will be used as a TAG.
This tag will be used by neuron-monitor to filter Neuron applications.
JSON objects and fields in the configuration file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- ``"neuron_runtimes"`` - array of objects specifying which Neuron
Applications to monitor and what metric groups are enabled for each
of them
- ``"tag_filter"`` - a regex which will be used to filter Neuron applications tags
in order to determine if they will be monitored (optional)
- ``"metrics"`` - array of objects specifying which metric groups to
capture for this Neuron application
- ``"type"`` - type of metric group
- ``"period"`` - this field applies to **metric group** objects and
sets the amount of time between two updates for that metric group
- if can be specified as part of the **root** and/or
**neuron_runtime** objects where it applies to all their children,
and/or as part of a **metric group** object
- if there's no period specified, a default value of **5 seconds**
will be used
- ``"system_metrics"`` - array of objects specifying which system level
metric groups are enabled
Neuron Runtime-level metric groups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- :ref:`neuron-monitor-nc-counters` - NeuronCore related metrics
- :ref:`neuron-monitor-memory-used` - data on the amount of memory used
by the Neuron application
- :ref:`neuron-monitor-vcpu-usage` - Neuron application vCPU
utilization data
- :ref:`neuron-monitor-execution-stats` - Neuron application execution
stats, including error count and latency
System-wide metric groups
~~~~~~~~~~~~~~~~~~~~~~~~~
- :ref:`neuron-monitor-vcpu-usage` - system-wide vCPU usage
- :ref:`neuron-monitor-memory-info` - system-wide memory usage
- :ref:`neuron-monitor-hw-counters` - counters for correctable and
uncorrectable memory ecc events
Execution model
---------------
|image|
neuron-monitor waits for one or more **metric groups** to be up for
update, then collects the corresponding data, consolidates it into a
report which is streamed to stdout as a JSON and goes back to waiting.
The JSON output format
----------------------
Whenever the report gets updated, a complete JSON is written to stdout.
This is its structure:
::
{
"neuron_runtime_data": [
{
"pid": 0,
"address": "",
"neuron_runtime_tag", "my_app_1",
"error": "",
"report": {
"neuroncore_counters": {
[...]
},
"execution_stats": {
[...]
},
"memory_used": {
[...]
},
"neuron_runtime_vcpu_usage": {
[...]
}
}
}
],
"system_data": {
"neuron_hw_counters": {
[...]
},
"vcpu_usage": {
[...]
},
"memory_info": {
[...]
}
},
"instance_info": {
[...]
},
"neuron_hardware_info": {
[...]
}
}
- ``"neuron_runtime_data"`` is an array containing one entry per each
Neuron application which passes the filter specified in the settings file
- ``"pid"`` is the pid of this Neuron application
- ``"neuron_runtime_tag"`` is the configured tag for the Neuron application
- ``"error"`` specifies any error that occurred when collecting data
from this Neuron application
- ``"report"`` will contain the results for the Neuron application-level
metric groups; their formats are described below
- ``"system_data"`` has a similar structure to ``"neuron_runtime_data"``‘s
``"report"`` but only contains system-level metric groups (not
associated to any Neuron application)
Regardless of the configuration, the following two JSON objects are always present
in the output:
**instance_info**
Contains information about the instance on which neuron-monitor is running.
::
"instance_info": {
"instance_name": "My_Instance",
"instance_id": "i-0011223344556677a",
"instance_type": "inf1.xlarge",
"instance_availability_zone": "us-west-2b",
"instance_availability_zone_id": "usw2-az2",
"instance_region": "us-west-2",
"ami_id": "ami-0011223344556677b",
"subnet_id": "subnet-112233ee",
"error": ""
}
Depending on when the instance was launched, the following fields might
not be available:
- ``instance_availability_zone_id`` : available only for instances
launched in 2020-08-24 and later
- ``instance_region`` : available only for instances launched on
2020-08-24 and later
- ``instance_name`` : available only if ``instance_region`` is set and
aws-cli tools are installed
``error`` will contain an error string if getting one of the fields,
**except those mentioned above**, resulted in an error.
**neuron_hardware_info**
Contains basic information about the Neuron hardware.
::
"neuron_hardware_info": {
"neuron_device_version": "v2",
"neuroncore_version": "v2",
"neuron_device_count": 16,
"neuroncore_per_device_count": 4,
"error": ""
}
- ``neuron_device_version``: version of the Neuron Devices on the instance,
- ``neuroncore_version``: version of the NeuronCores on the instance,
- ``neuron_device_count`` : number of available Neuron Devices
- ``neuroncore_per_device_count`` : number of NeuronCores present on each Neuron Device
- ``error`` : will contain an error string if any occurred when getting this information
(usually due to the Neuron Driver not being installed or not running).
Each **metric group** requested in the settings file will get an entry
in the resulting output. The general format for such an entry is:
::
"metric_group": {
"period": 1.015, // Actual captured period, in seconds
"error": "", // Error, if any occurred, otherwise an empty string
[...] // Metric group specific data
}
.. _runtime-level-metric-groups-1:
Neuron application level metric groups
--------------------------------------
.. _neuron-monitor-nc-counters:
neuroncore_counters
~~~~~~~~~~~~~~~~~~~~~
::
"neuroncore_counters": {
"period": 1.000113182,
"neuroncores_in_use": {
"0": {
"neuroncore_utilization": 42.01,
"flops": 1234567891011
},
"1": {
"neuroncore_utilization": 42.02,
"flops": 1234567891021
},
"2": {
"neuroncore_utilization": 42.03,
"flops": 1234567891031
},
"3": {
"neuroncore_utilization": 42.04,
"flops": 1234567891041
}
},
"error": ""
}
- ``"neuroncores_in_use"`` is an object containing data for all the
NeuronCores that were active when the data was captured, indexed by
NeuronCore index: ``"neuroncore_index": { neuroncore_data }``
- ``"neuroncore_utilization"`` - NeuronCore utilization, in percent,
during the captured period
- ``"flops"`` - number of floating point operations per second during
the captured period
- ``"error"`` - string containing any error that occurred when
collecting the data
.. _neuron-monitor-execution-stats:
execution_stats
~~~~~~~~~~~~~~~
::
"execution_stats": {
"period": 1.030613214,
"error_summary": {
"generic": 0,
"numerical": 0,
"transient": 0,
"model": 0,
"runtime": 0,
"hardware": 0
},
"execution_summary": {
"completed": 123,
"completed_with_err": 0,
"completed_with_num_err": 0,
"timed_out": 0,
"incorrect_input": 0,
"failed_to_queue": 0
},
"latency_stats": {
"total_latency": {
"p0": 0.01100001,
"p1": 0.01100002,
"p25": 0.01100004,
"p50": 0.01100008,
"p75": 0.01100010,
"p99": 0.01100012,
"p100": 0.01100013
},
"device_latency": {
"p0": 0.01000001,
"p1": 0.01000002,
"p25": 0.01000004,
"p50": 0.01000008,
"p75": 0.01000010,
"p99": 0.01000012,
"p100": 0.01000013
}
},
"error": ""
},
- ``"error_summary"`` is an object containing the error counts for the
captured period indexed by their type
- ``"generic"`` - generic execution errors
- ``"numeric"`` - NAN errors encountered during execution
- ``"transient"`` - recoverable errors, such as ECC corrections
- ``"model"`` - model-related errors
- ``"runtime"`` - Neuron Runtime errors
- ``"hardware"`` - hardware errors such as uncorrectable ECC issues
- ``"execution_summary"`` is an object containing all execution outcome
counts for the captured period indexed by their type
- ``"completed"`` - executions completed successfully
- ``"completed_with_err"`` - executions that ended in an error other
than a numeric error
- ``"completed_with_num_err"`` - executions that ended in a numeric
error
- ``"timed_out"`` - executions that took longer than the Neuron
Runtime configured timeout value
- ``"incorrect_input"`` - executions that failed to start due to
incorrect input being provided
- ``"failed_to_queue"`` - execution requests that were rejected due
to Neuron Runtime not being able to queue them
- ``"latency_stats"`` contains two objects containing latency
percentiles, in seconds, for the data captured for the model
executed during the captured period. If there are no models being
executed during this time, the two objects will be ``null`` (i.e.
``"total_latency": null``)
- ``"total_latency"`` - percentiles, in seconds, representing
latency for an execution as measured by the Neuron Runtime
- ``"device_latency"`` - percentiles, in seconds, representing execution time
exclusively on the Neuron Device
- ``"error"`` - string containing any error that occurred when
collecting the data
.. _neuron-monitor-memory-used:
memory_used
~~~~~~~~~~~
::
"memory_used": {
"period": 1.00001,
"neuron_runtime_used_bytes": {
"host": 6997643264,
"neuron_device": 12519788544,
"usage_breakdown": {
"host": {
"application_memory": 6996594688,
"constants": 0,
"dma_buffers": 1048576,
"tensors": 0
},
"neuroncore_memory_usage": {
"0": {
"constants": 193986816,
"model_code": 176285056,
"model_shared_scratchpad": 0,
"runtime_memory": 0,
"tensors": 20971520
},
"1": {
"constants": 193986816,
"model_code": 176285056,
"model_shared_scratchpad": 0,
"runtime_memory": 0,
"tensors": 20971520
},
...
}
}
"loaded_models": [
{
"name": "neff",
"uuid": "91f2f66e83ea419dace1da07617ad39f",
"model_id": 10005,
"is_running": false,
"subgraphs": {
"sg_00": {
"memory_used_bytes": {
"host": 20480,
"neuron_device": 21001024,
"usage_breakdown": {
"host": {
"application_memory": 20480,
"constants": 0,
"dma_buffers": 0,
"tensors": 0
},
"neuron_device": {
"constants": 20971520,
"model_code": 29504,
"runtime_memory": 0,
"tensors": 0
}
}
},
"neuroncore_index": 0,
"neuron_device_index": 12
}
}
},
...
],
"error": ""
}
- ``"memory_used"`` summarizes the amount of memory used by the
Neuron application
- ``"neuron_runtime_used_bytes"`` - current amount of memory used by
the Neuron application
- ``"host"`` - total host DRAM usage in bytes
- ``"neuron_device"`` - total Neuron device memory usage in bytes
- ``"usage_breakdown"`` - a breakdown of the total memory usage in the other two fields
- ``"host"`` - breakdown of the host memory usage
- ``"application_memory"`` - amount of host memory used by the application - this includes all allocations that are not included
in the next categories
- ``"constants"`` - amount of host memory used for constants during training (or weights during inference)
- ``"dma_buffers"`` - amount of host memory used for DMA transfers
- ``"tensors"`` - amount of host memory used for tensors
- ``"neuroncore_memory_usage"`` - a breakdown of memory allocated on the Neuron Devices and the NeuronCores for which it was allocated
- ``"0"`` - ``"32"`` (for trn1-32xlarge) - NeuronCores for which the memory was allocated
- ``"constants"`` - amount of device memory used for constants during training (or weights during inference)
- ``"model_code"`` - amount of device memory used for models' executable code
- ``"model_shared_scratchpad"`` - amount of device memory used for the scratchpad shared by the models - a memory region reserved for the models'
internal variables and auxiliary buffers
- ``"runtime_memory"`` - amount of device memory used by the Neuron Runtime
- ``"tensors"`` - amount of device memory used for tensors
- ``"loaded_models"`` - array containing objects representing loaded models
- ``"name"`` - name of the model
- ``"uuid"`` - unique id for the model
- ``"model_id"`` - Neuron application-assigned ID for this model
- ``"is_running"`` - true if this model is currently started, false otherwise
- "``subgraphs"`` - object containing all the subgraphs for the model, indexed by their name: ``"subgraph_name": { subgraph_data }``
- ``"memory_used_bytes"`` - memory usage for this subgraph
- ``"host"`` - total host DRAM usage in bytes
- ``"neuron_device"`` - total Neuron device DRAM usage in bytes
- ``"usage_breakdown"`` - a breakdown of memory allocated at load time for this model
- ``"host"`` - breakdown of host memory allocated for this model
- ``"application_memory"`` - amount of host memory allocated for this model by the Neuron Runtime which doesn't fall in any
of the next categories
- ``"constants"`` - amount of host memory used for constants during training (or weights during inference)
- ``"dma_buffers"`` - host memory allocated for DMA transfers for this model
- ``"tensors"`` - amount of device memory used for tensors at model load time
- ``"neuron_device"`` - a breakdown of device memory allocated for this model
- ``"constants"`` - amount of device memory used for constants during training (or weights during inference)
- ``"model_code"`` - amount of device memory used for the model's executable code
- ``"runtime_memory"`` - amount of device memory used by the Neuron Runtime for this model
- ``"tensors"`` - amount of device memory allocated for tensors at this model's load time
- ``"neuroncore_index"`` - NeuronCore index on which the subgraph is loaded
- ``"neuron_device_index"`` - Neuron device index on which the subgraph is loaded
- ``"error"`` - string containing any error that occurred when
collecting the data
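The following is a minimal sketch, not part of the Neuron tooling, of consuming this metric group from C++. It assumes the third-party ``nlohmann/json`` library for parsing and that the ``"memory_used"`` object has already been extracted from a line of ``neuron-monitor`` output; it simply sums the per-NeuronCore device memory reported under ``"usage_breakdown"``.

.. code-block:: c++

    #include <cstdint>
    #include <string>
    #include <nlohmann/json.hpp>  // third-party JSON parser, assumed to be available

    // Sum all per-NeuronCore device memory categories (constants, model_code,
    // model_shared_scratchpad, runtime_memory, tensors) in one "memory_used" sample.
    uint64_t total_neuroncore_bytes(const nlohmann::json& memory_used) {
        if (!memory_used.value("error", std::string{}).empty()) {
            return 0;  // the collector reported an error for this metric group
        }
        uint64_t total = 0;
        const auto& per_core = memory_used.at("neuron_runtime_used_bytes")
                                          .at("usage_breakdown")
                                          .at("neuroncore_memory_usage");
        for (const auto& core : per_core) {   // one object per NeuronCore index
            for (const auto& bytes : core) {  // one value per memory category
                total += bytes.get<uint64_t>();
            }
        }
        return total;
    }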
neuron_runtime_vcpu_usage
~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
"neuron_runtime_vcpu_usage": {
"period": 1.030604818,
"vcpu_usage": {
"user": 42.01,
"system": 12.34
},
"error": ""
}
- ``"vcpu_usage"`` - object showing vCPU usage in percentages for the
Neuron application during the captured period
- ``"user"`` - percentage of time spent in user code by this Neuron
Application
- ``"system"`` - percentage of time spent in kernel code by this
Neuron application
- ``"error"`` - string containing any error that occurred when
collecting the data
System level metric groups
--------------------------
.. _neuron-monitor-hw-counters:
neuron_hw_counters
~~~~~~~~~~~~~~~~~~
::
"neuron_hw_counters": {
"period": 1.030359284,
"neuron_devices": [
{
"neuron_device_index": 0,
"mem_ecc_corrected": 0,
"mem_ecc_uncorrected": 0,
"sram_ecc_uncorrected": 0,
"sram_ecc_corrected": 0
}
],
"error": ""
},
- ``"neuron_devices"`` - array containing ECC data for all Neuron devices
- ``"neuron_device_index"`` - Neuron device index
- ``"mem_ecc_corrected"`` - number of corrected ECC events in the
Neuron device’s DRAM
- ``"mem_ecc_uncorrected"`` - number of uncorrected ECC events in
the Neuron device’s DRAM
- ``"sram_ecc_uncorrected"`` - number of uncorrected ECC events in
the Neuron device’s SRAM
- ``"sram_ecc_corrected"`` - number of corrected ECC events in
the Neuron device’s SRAM
- ``"error"`` - string containing any error that occurred when
collecting the data
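As a minimal illustration, with the same assumptions as the sketch above (the third-party ``nlohmann/json`` library and an already-extracted ``"neuron_hw_counters"`` object), uncorrected ECC events can be detected as follows:

.. code-block:: c++

    #include <cstdint>
    #include <nlohmann/json.hpp>  // third-party JSON parser, assumed to be available

    // Returns true if any Neuron device reported uncorrected ECC events
    // in this neuron_hw_counters sample.
    bool has_uncorrected_ecc(const nlohmann::json& hw_counters) {
        for (const auto& device : hw_counters.at("neuron_devices")) {
            if (device.at("mem_ecc_uncorrected").get<uint64_t>() > 0 ||
                device.at("sram_ecc_uncorrected").get<uint64_t>() > 0) {
                return true;
            }
        }
        return false;
    }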
.. _neuron-monitor-vcpu-usage:
vcpu_usage
~~~~~~~~~~~~
::
"vcpu_usage": {
"period": 0.999974868,
"average_usage": {
"user": 32.77,
"nice": 0,
"system": 22.87,
"idle": 39.36,
"io_wait": 0,
"irq": 0,
"soft_irq": 0
},
"usage_data": {
"0": {
"user": 34.41,
"nice": 0,
"system": 27.96,
"idle": 37.63,
"io_wait": 0,
"irq": 0,
"soft_irq": 0
},
"1": {
"user": 56.84,
"nice": 0,
"system": 28.42,
"idle": 14.74,
"io_wait": 0,
"irq": 0,
"soft_irq": 0
},
[...]
},
"context_switch_count": 123456,
"error": ""
}
- each vCPU usage object contains the following fields (a small helper sketch follows the list below):
- ``"user"`` - percentage of time spent in user code
- ``"nice"`` - percentage of time spent executing niced user code
- ``"system"`` - percentage of time spent executing kernel code
- ``"idle"`` - percentage of time spent idle
- ``"io_wait"`` - percentage of time spent waiting for IO operations
- ``"irq"`` - percentage of time spent servicing hardware interrupts
- ``"soft_irq"`` - percentage of time spent servicing software
interrupts
- ``"average_usage"`` - contains the average usage across all vCPUs
during the captured period
- ``"usage_data"`` - contains per vCPU usage during the captured period
- ``"context_switch_count"`` - contains the number of vCPU context
switches during the captured period
- ``"error"`` - string containing any error that occurred when
collecting the data
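A small helper along the same lines (same assumptions as the sketches above: ``nlohmann/json`` and an already-extracted ``"vcpu_usage"`` object) can reduce one of these per-vCPU objects to a single busy percentage:

.. code-block:: c++

    #include <nlohmann/json.hpp>  // third-party JSON parser, assumed to be available

    // Busy share of one vCPU, in percent, from a parsed per-vCPU object such as
    // usage_data["0"] or average_usage. "idle" and "io_wait" are not counted as
    // busy time here; adjust the definition as needed.
    double busy_percent(const nlohmann::json& cpu) {
        return cpu.at("user").get<double>() + cpu.at("nice").get<double>() +
               cpu.at("system").get<double>() + cpu.at("irq").get<double>() +
               cpu.at("soft_irq").get<double>();
    }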
.. _neuron-monitor-memory-info:
memory_info
~~~~~~~~~~~
::
"memory_info": {
"period": 5.346411129,
"memory_total_bytes": 49345835008,
"memory_used_bytes": 16042344448,
"swap_total_bytes": 0,
"swap_used_bytes": 0,
"error": ""
}
- ``"memory_total_bytes"`` - total size of the host memory, in bytes
- ``"memory_used_bytes"`` - amount of host memory in use, in bytes
- ``"swap_total_bytes"`` - total size of the host swap file, in bytes
- ``"swap_used_bytes"`` - amount of swap memory in use, in bytes
.. _neuron-monitor-companion-scripts:
Companion scripts
-----------------
neuron-monitor is installed with two example Python companion scripts:
`neuron-monitor-cloudwatch.py <#neuron-monitor-cloudwatchpy>`__ and
`neuron-monitor-prometheus.py <#neuron-monitor-prometheuspy>`__.
.. _neuron-monitor-cloudwatchpy:
neuron-monitor-cloudwatch.py
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It requires Python3 and the `boto3 Python
module <https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#quickstart>`__.
It is installed to:
``/opt/aws/neuron/bin/neuron-monitor-cloudwatch.py``.
.. _using-neuron-monitor-cloudwatchpy:
Using neuron-monitor-cloudwatch.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
neuron-monitor | neuron-monitor-cloudwatch.py --namespace <namespace> --region <region>
For example:
::
neuron-monitor | neuron-monitor-cloudwatch.py --namespace neuron_monitor_test --region us-west-2
.. _neuron-monitor-prometheuspy:
neuron-monitor-prometheus.py
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It requires Python3 and the `Prometheus client Python
module <https://github.com/prometheus/client_python>`__. It is installed
to: ``/opt/aws/neuron/bin/neuron-monitor-prometheus.py``.
.. _using-neuron-monitor-prometheuspy:
Using neuron-monitor-prometheus.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
neuron-monitor | neuron-monitor-prometheus.py --port <port>
For example:
::
neuron-monitor | neuron-monitor-prometheus.py --port 8008
The default value for ``--port`` is ``8000``.
If your data visualization framework is Grafana, we provide a :download:`Grafana dashboard </src/examples/neuron-monitor/neuron-monitor-grafana.json>`
which integrates with Prometheus and this script.
.. |image| image:: ../../images/nm-img2.png
```
.. _neuron-tools:
Neuron Tools
============
Neuron provides debugging and profiling tools, with visualization support through the TensorBoard plugin. The Neuron helper tools assist with best practices for model onboarding and performance optimization. The debugging and profiling tools provide insight into runtime behavior and performance metrics.
.. toctree::
:maxdepth: 1
:hidden:
/tools/neuron-sys-tools/index
.. dropdown:: System Tools
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
* :ref:`neuron-monitor-ug`
* :ref:`neuron-top-ug`
* :ref:`neuron-ls-ug`
* :ref:`neuron-profile-ug`
* :ref:`neuron-sysfs-ug`
* :ref:`nccom-test`
* :ref:`What's New <neuron-tools-rn>`
.. toctree::
:maxdepth: 1
:hidden:
/tools/tensorboard/index
.. dropdown:: TensorBoard Plugin for Neuron
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
* :ref:`neuronx-plugin-tensorboard`
* :ref:`neuron-plugin-tensorboard`
* :ref:`What's New <neuron-tensorboard-rn>`
.. toctree::
:maxdepth: 1
:hidden:
/tools/helper-tools/index
.. dropdown:: Helper Tools
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
* :ref:`neuron_check_model`
* :ref:`neuron_gatherinfo`
.. toctree::
:maxdepth: 1
:hidden:
/tools/neuronperf/index
.. dropdown:: Performance and Benchmarks Tools
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
* :ref:`neuronperf`
* :ref:`nccom-test`
.. dropdown:: Tutorials
:class-title: sphinx-design-class-title-med
:class-body: sphinx-design-class-body-small
:animate: fade-in
.. tab-set::
.. tab-item:: TensorBoard
* :ref:`neuronx-plugin-tensorboard`
* :ref:`tb_track_training_minst`
* :ref:`torch-neuronx-profiling-with-tb`
.. tab-item:: System Tools
* :ref:`track-system-monitor`
```
```
System Tools
============
.. toctree::
:maxdepth: 1
Neuron-Monitor User Guide </tools/neuron-sys-tools/neuron-monitor-user-guide>
Neuron-Top User Guide </tools/neuron-sys-tools/neuron-top-user-guide>
Neuron-LS User Guide </tools/neuron-sys-tools/neuron-ls>
Neuron Profile User Guide </tools/neuron-sys-tools/neuron-profile-user-guide>
Neuron-Sysfs User Guide </tools/neuron-sys-tools/neuron-sysfs-user-guide>
NCCOM-TEST User Guide </tools/neuron-sys-tools/nccom-test>
What's New </release-notes/tools/aws-neuronx-tools>
```
```
.. _custom-ops-api-ref-guide:
Custom Operators API Reference Guide [Experimental]
===================================================
This page provides the documentation for the C++ API available to creators of Neuron custom C++ operators (see :ref:`neuron_c++customops`).
.. contents:: Table of contents
:local:
:depth: 1
Tensor Library
--------------
The tensor library used for Neuron custom C++ operators is based upon the PyTorch ATen tensor library. This includes the core Tensor class as well as select operations defined below. Users need to include the ``<torch/torch.h>`` header to access the tensor library. A small example of using the tensor library looks as follows.
.. code-block:: c++
#include <torch/torch.h>
...
torch::Tensor a = torch::zeros({32, 32, 3}, torch::kFloat);
Tensor Factory Functions
^^^^^^^^^^^^^^^^^^^^^^^^
The tensor factory functions provide different means for creating new tensors.
They each take in a ``size`` argument that specifies the size of each dimension of the tensor created (with the exception of ``eye``, which takes in two int64's and creates a strictly 2-dimensional identity matrix.)
``c10::TensorOptions`` allows the specification of optional properties for the tensor being created. Currently, only the ``dtype`` property has an effect on tensor construction, and it must be specified. Other properties, such as ``layout``, may be supported in the future.
The example above shows a common way to use the factory functions; a slightly fuller sketch follows the list of functions below.
The following dtypes are supported:
* torch::kFloat
* torch::kBFloat16
* torch::kHalf
* torch::kInt
* torch::kChar
* torch::kLong
* torch::kShort
* torch::kByte
.. cpp:function:: torch::Tensor empty(torch::IntArrayRef size, c10::TensorOptions options)
Creates a tensor filled with uninitialized data, with the specified size and options. Slightly faster than other factory functions since it skips writing data to the tensor.
.. cpp:function:: torch::Tensor full(torch::IntArrayRef size, const Scalar & fill_value, c10::TensorOptions options)
Creates a tensor filled with the specified ``fill_value``, with the specified size and options.
.. cpp:function:: torch::Tensor zeros(torch::IntArrayRef size, c10::TensorOptions options)
Creates a tensor filled with zeros, with the specified size and options.
.. cpp:function:: torch::Tensor ones(torch::IntArrayRef size, c10::TensorOptions options)
Creates a tensor filled with ones, with the specified size and options.
.. cpp:function:: torch::Tensor eye(int64_t n, int64_t m, c10::TensorOptions options)
Creates a 2-D tensor with ones on the diagonal and zeros elsewhere.
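As an additional illustration, the short sketch below calls each factory function documented above; the shapes and dtypes are arbitrary choices, not requirements.

.. code-block:: c++

    #include <torch/torch.h>

    // Arbitrary shapes and dtypes, chosen only to illustrate the factory functions.
    torch::Tensor factory_examples() {
        torch::Tensor a = torch::empty({4, 8}, torch::kFloat);      // uninitialized contents
        torch::Tensor b = torch::full({4, 8}, 0.5, torch::kFloat);  // every element is 0.5
        torch::Tensor c = torch::ones({16}, torch::kBFloat16);      // sixteen ones
        torch::Tensor d = torch::zeros({16}, torch::kInt);          // sixteen zeros
        torch::Tensor i = torch::eye(4, 4, torch::kInt);            // 4x4 identity matrix
        return b;
    }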
Tensor Operation Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The tensor library provides commonly used operations defined below. The tensor operation functions do not support broadcasting; the shape of the operands must match if applicable.
The library provides two styles of functions for each tensor operation. For functions ending with ``_out``, a tensor with the proper size must be provided to which the output is written. This is illustrated in the example below.
.. code-block:: c++
torch::exp_out(t_out, t_in);
Alternatively, for functions that do not end in ``_out``, a new tensor that contains the results of the operation is allocated and returned as seen in the example below.
.. code-block:: c++
torch::Tensor t_out = torch::exp(t_in);
.. warning::
Only operations that are documented below are supported.
.. cpp:function:: torch::Tensor& abs_out(torch::Tensor &result, torch::Tensor &self)
.. cpp:function:: torch::Tensor abs(torch::Tensor& self)
Computes the absolute value of each element in ``self``.
.. cpp:function:: torch::Tensor& ceil_out(torch::Tensor &result, torch::Tensor &self)
.. cpp:function:: torch::Tensor ceil(torch::Tensor &self)
Computes the ceiling of the elements of ``self``, the smallest integer greater than or equal to each element.
.. cpp:function:: torch::Tensor& floor_out(torch::Tensor& result, torch::Tensor &self)
.. cpp:function:: torch::Tensor floor(torch::Tensor &self)
Computes the floor of the elements of ``self``, the largest integer less than or equal to each element.
.. cpp:function:: torch::Tensor& sin_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor sin(torch::Tensor& self)
Computes the sine value of the elements of ``self``.
.. cpp:function:: torch::Tensor& cos_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor cos(torch::Tensor& self)
Computes the cosine value of the elements of ``self``.
.. cpp:function:: torch::Tensor& tan_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor tan(torch::Tensor& self)
Computes the tangent value of the elements of ``self``.
.. cpp:function:: torch::Tensor& log_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor log(torch::Tensor& self)
Computes the natural logarithm of the elements of ``self``.
.. cpp:function:: torch::Tensor& log2_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor log2(torch::Tensor& self)
Computes the base-2 logarithm of the elements of ``self``.
.. cpp:function:: torch::Tensor& log10_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor log10(torch::Tensor& self)
Computes the base-10 logarithm of the elements of ``self``.
.. cpp:function:: torch::Tensor& exp_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor exp(torch::Tensor& self)
Computes the exponential of the elements of ``self``.
.. cpp:function:: torch::Tensor& pow_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar & exponent)
.. cpp:function:: torch::Tensor& pow_out(torch::Tensor& result, const torch::Scalar& self, const torch::Tensor & exponent)
.. cpp:function:: torch::Tensor& pow_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor & exponent)
.. cpp:function:: torch::Tensor pow(const torch::Tensor& self, const torch::Scalar & exponent)
.. cpp:function:: torch::Tensor pow(const torch::Scalar& self, const torch::Tensor & exponent)
.. cpp:function:: torch::Tensor pow(const torch::Tensor& self, const torch::Tensor & exponent)
Takes the power of each element in ``self`` with ``exponent``.
.. cpp:function:: torch::Tensor& clamp_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar& minval, const torch::Scalar& maxval)
.. cpp:function:: torch::Tensor clamp(const torch::Tensor& self, const torch::Scalar& minval, const torch::Scalar& maxval)
Clamps all elements in ``self`` into the range ``[minval, maxval]``.
.. cpp:function:: torch::Tensor& add_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar &other, const torch::Scalar& alpha=1)
.. cpp:function:: torch::Tensor& add_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor& other, const torch::Scalar& alpha=1)
.. cpp:function:: torch::Tensor add(const torch::Tensor& self, const torch::Scalar &other, const torch::Scalar& alpha=1)
.. cpp:function:: torch::Tensor add(const torch::Tensor& self, const torch::Tensor &other, const torch::Scalar& alpha=1)
Adds ``other``, scaled by ``alpha``, to ``self``,
.. math::
out = self + alpha \times other.
.. cpp:function:: torch::Tensor& sub_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar &other, const torch::Scalar& alpha=1)
.. cpp:function:: torch::Tensor& sub_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor& other, const torch::Scalar& alpha=1)
.. cpp:function:: torch::Tensor sub(const torch::Tensor& self, const torch::Tensor &other, const torch::Scalar& alpha=1)
.. cpp:function:: torch::Tensor sub(const torch::Tensor& self, const torch::Scalar& other, const torch::Scalar& alpha=1)
Subtracts ``other``, scaled by ``alpha``, from ``self``,
.. math::
out = self - alpha \times other.
.. cpp:function:: torch::Tensor& mul_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar &other)
.. cpp:function:: torch::Tensor& mul_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor mul(const torch::Tensor& self, const torch::Scalar &other)
.. cpp:function:: torch::Tensor mul(const torch::Tensor& self, const torch::Tensor &other)
Multiplies ``self`` by ``other``.
.. cpp:function:: torch::Tensor& div_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar &other)
.. cpp:function:: torch::Tensor& div_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor div(const torch::Tensor& self, const torch::Scalar &other)
.. cpp:function:: torch::Tensor div(const torch::Tensor& self, const torch::Tensor &other)
Divides ``self`` by ``other``.
.. note::
For tensor-tensor bitwise operations, all the bitwise operations are elementwise between the two tensors. For scalar-tensor bitwise operations, the scalar is cast to the datatype of the tensor before computing the bitwise operation.
.. cpp:function:: torch::Tensor& bitwise_and_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor& bitwise_and_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar& other)
.. cpp:function:: torch::Tensor& bitwise_and_out(torch::Tensor& result, const torch::Scalar& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor bitwise_and(const torch::Tensor& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor bitwise_and(const torch::Tensor& self, const torch::Scalar& other)
.. cpp:function:: torch::Tensor bitwise_and(const torch::Scalar& self, const torch::Tensor& other)
Computes the bitwise AND of ``self`` and ``other``. The input tensors must be of integral types.
.. cpp:function:: torch::Tensor& bitwise_or_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor& bitwise_or_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar& other)
.. cpp:function:: torch::Tensor& bitwise_or_out(torch::Tensor& result, const torch::Scalar& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor bitwise_or(const torch::Tensor& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor bitwise_or(const torch::Tensor& self, const torch::Scalar& other)
.. cpp:function:: torch::Tensor bitwise_or(const torch::Scalar& self, const torch::Tensor& other)
Computes the bitwise OR of ``self`` and ``other``. The input tensors must be of integral types.
.. cpp:function:: torch::Tensor& bitwise_not_out(torch::Tensor& result, const torch::Tensor& self)
.. cpp:function:: torch::Tensor bitwise_not(const torch::Tensor& self)
Computes the bitwise NOT of ``self``. The input tensor must be of an integral type. A short example combining several of the operations above is shown below.
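The sketch below combines a few of the operations documented above, using both the allocating style and the ``_out`` style; the particular computation is illustrative only.

.. code-block:: c++

    #include <torch/torch.h>

    // Compute clamp(2*x + y, 0, 1) using only operations documented above.
    torch::Tensor scale_add_clamp(const torch::Tensor& x, const torch::Tensor& y) {
        torch::Tensor scaled = torch::mul(x, 2.0f);    // allocating style: returns a new tensor
        torch::Tensor summed = torch::add(scaled, y);  // scaled + 1 * y
        torch::Tensor result = torch::empty(x.sizes(), torch::kFloat);
        torch::clamp_out(result, summed, 0.0f, 1.0f);  // _out style: writes into result
        return result;
    }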
Class torch::Tensor
^^^^^^^^^^^^^^^^^^^
Constructors
""""""""""""
Users should not call the Tensor constructor directly but instead use one of the Tensor factory functions.
Member Functions
""""""""""""""""
.. cpp:function:: template<typename T, size_t N> TensorAccessor<T,N,true> accessor() const&
Return a ``TensorAccessor`` for element-wise random access of a Tensor's elements. Scalar type and dimension template parameters must be specified. This const-qualified overload returns a read-only ``TensorAccessor``, preventing the user from writing to Tensor elements. See the Tensor Accessors section below for more details.
.. cpp:function:: template<typename T, size_t N> TensorAccessor<T,N,false> accessor() &
Return a ``TensorAccessor`` for element-wise random access of a Tensor's elements. Scalar type and dimension template parameters must be specified. This non-const-qualified overload returns a ``TensorAccessor`` that can be used to both read and write to Tensor elements. See the Tensor Accessors section below for more details.
.. cpp:function:: template<typename T> TensorReadStreamAccessor<T> read_stream_accessor() const&
Opens a streaming accessor for read on a tensor. Template parameter ``T`` is the scalar type of the tensor data. See Streaming Accessors section below for more details.
.. cpp:function:: template<typename T> TensorWriteStreamAccessor<T> write_stream_accessor() &
Opens a streaming accessor for write on a tensor. Template parameter ``T`` is the scalar type of the tensor data. See Streaming Accessors section below for more details.
.. cpp:function:: CoherencyEnforcer::Policy get_accessor_coherence_policy() const
Get the Tensor accessor coherence policy. See Coherence section below for more details.
.. cpp:function:: void set_accessor_coherence_policy(CoherencyEnforcer::Policy policy) const
Set the Tensor accessor coherence policy. See Coherence section below for more details.
.. cpp:function:: TensorTcmAccessor<true> tcm_accessor() const&
Opens a TCM accessor on a tensor. This const-qualified overload returns a read-only ``TensorTcmAccessor``, preventing the user from writing to Tensor elements. See TCM Accessor section below for more details.
.. cpp:function:: TensorTcmAccessor<false> tcm_accessor() &
Opens a TCM accessor on a tensor. This non-const-qualified overload returns a ``TensorTcmAccessor`` that can be used to both read and write to Tensor elements. See TCM Accessor section below for more details.
.. cpp:function:: torch::Tensor& fill_(const torch::Scalar & value) const
Fill a tensor with the specified value.
Tensor Operators
""""""""""""""""
.. cpp:function:: Tensor& operator=(const Tensor &x) &
.. cpp:function:: Tensor& operator=(Tensor &&x) &
Assignment operators
Tensor Accessors
----------------
The standard tensor accessor provides element-wise random access to ``Tensor`` elements. It is created by calling ``Tensor::accessor()`` and can be used much like the PyTorch ATen version (see https://pytorch.org/cppdocs/notes/tensor_basics.html#cpu-accessors). However, it is not as fast as the other methods of accessing a ``Tensor``, such as the streaming accessor or the TCM accessor.
Example Usage
^^^^^^^^^^^^^
Element-wise add of two 1D tensors using ``TensorAccessor``.
.. code-block:: c++
torch::Tensor tensor_add_compute(const torch::Tensor& t1, const torch::Tensor& t2) {
size_t num_elem = t1.numel();
assert(t1.sizes() == t2.sizes());
torch::Tensor t_out = torch::empty({num_elem}, torch::kFloat);
auto t1_acc = t1.accessor<float, 1>();
auto t2_acc = t2.accessor<float, 1>();
auto t_out_acc = t_out.accessor<float, 1>();
for (size_t i = 0; i < num_elem; i++) {
t_out_acc[i] = t1_acc[i] + t2_acc[i];
}
return t_out;
}
.. _custom-ops-ref-guide-mem-arch:
Memory Architecture
^^^^^^^^^^^^^^^^^^^
Tensor data is stored in NeuronCore memory. The various types of accessors enable users to access tensor data from their custom C++ operator code running on the GPSIMD engine.
.. image:: /neuron-customops/images/ncorev2_gpsimd_memory.png
:width: 600
Streaming Accessors
-------------------
Streaming accessors provide the user the ability to access ``Tensor`` elements in sequential order, faster than the standard tensor accessor. There are two stream accessor classes, one for reading and one for writing. Users should not construct stream accessors directly, but should get them from a ``Tensor`` using ``Tensor::read_stream_accessor()`` and ``Tensor::write_stream_accessor()``.
An active stream accessor is defined as a stream accessor that has been instantiated and not yet closed (via the ``close()`` method or by going out-of-scope).
The user is responsible for managing stream accessors concurrently accessing the same ``Tensor``. For safest usage, no stream accessor should be active while there is an active ``TensorWriteStreamAccessor`` on the same ``Tensor``. The user may either have multiple ``TensorReadStreamAccessors`` active on the same ``Tensor``, or only have a single ``TensorWriteStreamAccessor`` active on that ``Tensor``. Stream accessors should not be used concurrently with standard tensor accessors on the same ``Tensor``.
An unlimited number of active stream accessors (in total, across all ``Tensors``) are functionally supported, but only up to 4 active stream accessors will be performant. Additional stream accessors beyond the 4th will have performance similar to that of a standard tensor accessor.
Example Usage
^^^^^^^^^^^^^
Element-wise add of two tensors using ``TensorReadStreamAccessor`` and ``TensorWriteStreamAccessor``.
.. code-block:: c++
torch::Tensor tensor_add_compute(const torch::Tensor& t1, const torch::Tensor& t2) {
assert(t1.sizes() == t2.sizes());
torch::Tensor t_out = torch::empty(t1.sizes(), torch::kFloat);
auto t1_rd_stm_acc = t1.read_stream_accessor<float>();
auto t2_rd_stm_acc = t2.read_stream_accessor<float>();
auto t_out_wr_stm_acc = t_out.write_stream_accessor<float>();
for (int i = 0; i < t1.numel(); i++) {
auto sum = t1_rd_stm_acc.read() + t2_rd_stm_acc.read();
t_out_wr_stm_acc.write(sum);
}
return t_out;
}
Class torch::TensorReadStreamAccessor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. cpp:class:: template<typename T> class torch::TensorReadStreamAccessor
The class template parameter ``T`` is the scalar type of the tensor data.
Member Functions
""""""""""""""""
.. cpp:function:: T read()
Reads the next element in the stream. The user is responsible for knowing when to stop reading from a ``TensorReadStreamAccessor``. Reading past the end of the stream or on a closed stream results in undefined behaviour.
.. cpp:function:: int close()
Closes stream. Do not read from the stream after calling ``close()``.
Class torch::TensorWriteStreamAccessor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. cpp:class:: template<typename T> class torch::TensorWriteStreamAccessor
The class template parameter ``T`` is the scalar type of the tensor data.
Member Functions
""""""""""""""""
.. cpp:function:: void write(T value)
Writes to the next element in the stream. The written value is not guaranteed to be written back to the Tensor's memory until the ``TensorWriteStreamAccessor`` goes out of scope or the user explicitly calls ``close()``. The user is responsible for knowing when to stop writing to a stream accessor. Writing past the end of the stream or on a closed stream results in undefined behaviour.
.. cpp:function:: int close()
Closes stream. Flushes write data to the ``Tensor``'s memory. Do not write to the stream after calling ``close()``.
Coherence
^^^^^^^^^
Stream accessors cache ``Tensor`` data in GPSIMD tightly-coupled memory (TCM), but do not ensure their caches remain coherent. When exactly they read from or write back to NeuronCore memory is opaque to the user (except for ``close()`` which forces a write back).
The safest way to use them is to ensure that no stream accessor is active (instantiated and not yet closed) while there is an active write stream accessor on the same ``Tensor``. The user should either have multiple read stream accessors active on the same ``Tensor``, or only have a single write stream accessor active on that ``Tensor``.
The standard tensor accessors read/write NeuronCore memory directly. Therefore, tensor accessors can safely concurrently access the same ``Tensor``, but it is safest not to use them concurrently with stream accessors since NeuronCore memory isn't guaranteed to be coherent with the stream accessor caches.
These coarse-grained guidelines are best practices, but it is possible to ignore them with careful usage of the accessors (making sure elements are read before they are written to, elements written to are written back before being read again, etc).
The coherence policy of a ``Tensor`` determines what to do when there is potentially incoherent access by an accessor of that ``Tensor``. It can either cause an error, or allow it but print a warning, or do nothing. In the case of the latter two options, it is the user's responsibility to ensure they carefully use accessors coherently. Coherence policy for ``Tensors`` is ``torch::CoherencyEnforcer::Policy::COHERENT`` by default, but can be changed using ``Tensor::set_accessor_coherence_policy()``.
.. code-block:: c++
// class torch::CoherencyEnforcer
enum Policy {
// Enforce a resource is acquired in a way that guarantees coherence
// Causes an error if it encounters potentially incoherent access
COHERENT,
// Allows potentially incoherent access, but will print a warning
INCOHERENT_VERBOSE,
// Allows potentially incoherent access, no error or warnings
INCOHERENT_QUIET
};
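The following is a minimal sketch of using this API, assuming a ``float`` tensor and that the caller really does guarantee there is no concurrent writer; whether relaxing the policy is appropriate depends entirely on how carefully the accessors are used.

.. code-block:: c++

    #include <torch/torch.h>

    // Temporarily allow potentially incoherent access (with warnings) on one tensor.
    float first_element_with_relaxed_policy(torch::Tensor& t) {
        auto previous = t.get_accessor_coherence_policy();
        t.set_accessor_coherence_policy(torch::CoherencyEnforcer::Policy::INCOHERENT_VERBOSE);

        auto reader = t.read_stream_accessor<float>();  // caller guarantees no concurrent writer
        float value = reader.read();
        reader.close();

        t.set_accessor_coherence_policy(previous);      // restore the original policy
        return value;
    }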
TCM Accessor
------------
TCM accessors provide the fastest read and write performance. TCM accessors allow the user to manually manage copying data between the larger but slower-access NeuronCore memory and the faster GPSIMD tightly-coupled memory (TCM). It may be helpful to see the diagram under :ref:`custom-ops-ref-guide-mem-arch`. Create a ``TensorTcmAccessor`` from a ``Tensor`` by calling ``Tensor::tcm_accessor()``. Users can allocate and free TCM memory using ``tcm_malloc()`` and ``tcm_free()``. Users have access to a 16KB pool of TCM memory. Note that the streaming accessors also allocate from this pool (4KB each). TCM accessors do not do any coherence checks.
.. note::
See :ref:`neuronx-customop-mlp-perf` for a tutorial on how to use TCM accessors.
Example Usage
^^^^^^^^^^^^^
Element-wise negate of a tensor using ``TensorTcmAccessor``.
.. code-block:: c++
torch::Tensor tensor_negate_compute(const torch::Tensor& t_in) {
size_t num_elem = t_in.numel();
torch::Tensor t_out = torch::empty(t_in.sizes(), torch::kFloat);
static constexpr size_t buffer_size = 1024;
float *tcm_buffer = (float *)torch::neuron::tcm_malloc(sizeof(float) * buffer_size);
if (tcm_buffer != nullptr) {
// tcm_malloc allocated successfully, use TensorTcmAccessor
auto t_in_tcm_acc = t_in.tcm_accessor();
auto t_out_tcm_acc = t_out.tcm_accessor();
for (size_t i = 0; i < num_elem; i += buffer_size) {
size_t remaining_elem = num_elem - i;
size_t copy_size = (remaining_elem > buffer_size) ? buffer_size : remaining_elem;
t_in_tcm_acc.tensor_to_tcm<float>(tcm_buffer, i, copy_size);
for (size_t j = 0; j < copy_size; j++) {
tcm_buffer[j] *= -1;
}
t_out_tcm_acc.tcm_to_tensor<float>(tcm_buffer, i, copy_size);
}
torch::neuron::tcm_free(tcm_buffer);
} else {
// Handle not enough memory...
}
return t_out;
}
TCM Management Functions
^^^^^^^^^^^^^^^^^^^^^^^^
.. cpp:function:: void * torch::neuron::tcm_malloc(size_t nbytes)
Allocate ``nbytes`` bytes of memory from TCM and return pointer to this memory. Upon failure, returns null.
.. cpp:function:: void torch::neuron::tcm_free(void * ptr)
Free memory that was allocated by ``tcm_malloc()``. Undefined behaviour if ``ptr`` was not returned from a previous call to ``tcm_malloc()``.
Class torch::TensorTcmAccessor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. cpp:class:: template<bool read_only> class torch::TensorTcmAccessor
The ``read_only`` template parameter controls whether or not you can write to the accessor's ``Tensor``. A ``const Tensor`` will return a read-only ``TensorTcmAccessor`` from ``Tensor::tcm_accessor()``.
Member Functions
""""""""""""""""
.. cpp:function:: template<typename T> void tensor_to_tcm(T * tcm_ptr, size_t tensor_offset, size_t num_elem)
Copy ``num_elem`` elements from the accessor's ``Tensor`` starting at the index ``tensor_offset`` to a TCM buffer starting at ``tcm_ptr``. Tensor indexing is performed as if the tensor was flattened. Template parameter ``T`` is the scalar type of the tensor data. The TCM buffer's size should be at least ``sizeof(T) * num_elem`` bytes.
.. cpp:function:: template<typename T> void tcm_to_tensor(T * tcm_ptr, size_t tensor_offset, size_t num_elem)
Copy ``num_elem`` elements from a TCM buffer starting at ``tcm_ptr`` to the accessor's ``Tensor`` starting at the index ``tensor_offset``. Tensor indexing is performed as if the tensor was flattened. The TCM buffer's size should be at least ``sizeof(T) * num_elem`` bytes.
Writing Directly to Output Tensor
---------------------------------
.. cpp:function:: torch::Tensor get_dst_tensor()
Returns a reference to the Custom C++ operator output tensor (return value). If this method is called, it is assumed that data will be written to this output tensor, and the tensor returned from the C++ operator will be ignored. Using this method will improve performance by avoiding additional copying of the return value. See example below for function usage.
.. code-block:: c++
:emphasize-lines: 4, 12
// Example of write to get_dst_tensor()
torch::Tensor example_kernel(const torch::Tensor& t_in) {
size_t num_elem = t_in.numel();
torch::Tensor t_out = get_dst_tensor();
auto t_out_tcm_acc = t_out.tcm_accessor();
float *tcm_buffer = (float *)torch::neuron::tcm_malloc(sizeof(float) * buffer_size);
// Populate tcm_buffer with results
...
// Write to t_out through the tcm_accessor
t_out_tcm_acc.tcm_to_tensor<float>(tcm_buffer, offset, copy_size);
...
}
Using multiple GPSIMD cores
---------------------------
.. note::
See :ref:`neuronx-customop-mlp-perf` for a tutorial on how to use multiple GPSIMD cores to execute the Custom C++ Operator
By default, Custom C++ operators target a single core of the GPSIMD-Engine. Performance of Custom C++ operators can be improved by targeting multiple cores. To enable usage of multiple GPSIMD cores, ``multicore=True`` should be passed to ``custom_op.load()``.
.. code-block:: python
:emphasize-lines: 6
custom_op.load(
name=name,
compute_srcs=compute_srcs,
shape_srcs=shape_srcs,
build_directory=os.getcwd(),
multicore=True
)
Each GPSIMD core executes the same kernel function. The user can control the execution on each core by conditioning the Custom C++ operator logic on the core id (obtained via ``get_cpu_id()`` API). This is illustrated in the example below.
The following functions are defined in ``neuron/neuron-utils.hpp``
.. cpp:function:: uint32_t get_cpu_id()
Return the id of the core that the Custom C++ operator is executing on; the id is in the range ``[0, get_cpu_count())``
.. cpp:function:: uint32_t get_cpu_count()
Return the total number of available GPSIMD cores.
.. code-block:: c++
:emphasize-lines: 5, 6, 15
torch::Tensor example_kernel(const torch::Tensor& t_in) {
size_t num_elem = t_in.numel();
torch::Tensor t_out = get_dst_tensor();
uint32_t cpu_id = get_cpu_id();
uint32_t cpu_count = get_cpu_count();
uint32_t partition = num_elem / cpu_count;
float *tcm_buffer = (float *)torch::neuron::tcm_malloc(sizeof(float) * buffer_size);
// Populate tcm_buffer with desired results
...
// Write to t_out with an offset computed from cpu_id and cpu_count
t_out_tcm_acc.tcm_to_tensor<float>(tcm_buffer, partition*cpu_id, copy_size);
...
}
Return Value Handling
^^^^^^^^^^^^^^^^^^^^^
When using multiple GPSIMD cores, the ``get_dst_tensor()`` API must be used to write the return value of the Custom C++ operator. Not writing data to the tensor reference returned by ``get_dst_tensor()``, or not invoking ``get_dst_tensor()`` at all, results in undefined behavior. The user is responsible for writing the appropriate portion of the output reference tensor from a given GPSIMD core. Since there is no synchronization between GPSIMD cores, it is advised that each GPSIMD core writes to a mutually exclusive partition of the output reference tensor. A complete multicore sketch is shown below.
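Putting these pieces together, the following is a minimal end-to-end sketch (not taken from the Neuron samples) of a multicore element-wise negate: each GPSIMD core moves its own partition through a TCM buffer and writes only that partition of the output tensor obtained from ``get_dst_tensor()``. The partitioning scheme and buffer size are illustrative choices.

.. code-block:: c++

    #include <algorithm>
    #include <cstdint>
    #include <torch/torch.h>
    #include <neuron/neuron-utils.hpp>

    torch::Tensor multicore_negate(const torch::Tensor& t_in) {
        torch::Tensor t_out = get_dst_tensor();
        uint32_t cpu_id = get_cpu_id();
        uint32_t cpu_count = get_cpu_count();

        // Split the flattened tensor into one contiguous partition per core;
        // the last core also takes any remainder.
        size_t num_elem = t_in.numel();
        size_t partition = num_elem / cpu_count;
        size_t begin = partition * cpu_id;
        size_t end = (cpu_id == cpu_count - 1) ? num_elem : begin + partition;

        constexpr size_t buffer_size = 1024;
        float *tcm_buffer = (float *)torch::neuron::tcm_malloc(sizeof(float) * buffer_size);
        if (tcm_buffer == nullptr) {
            return t_out;  // not enough TCM memory; handle as appropriate
        }

        auto in_acc = t_in.tcm_accessor();
        auto out_acc = t_out.tcm_accessor();
        for (size_t i = begin; i < end; i += buffer_size) {
            size_t copy_size = std::min(buffer_size, end - i);
            in_acc.tensor_to_tcm<float>(tcm_buffer, i, copy_size);
            for (size_t j = 0; j < copy_size; j++) {
                tcm_buffer[j] = -tcm_buffer[j];
            }
            out_acc.tcm_to_tensor<float>(tcm_buffer, i, copy_size);
        }
        torch::neuron::tcm_free(tcm_buffer);
        return t_out;
    }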
printf()
--------------
Custom C++ operators support the use of C++'s ``printf()`` to send information to the host's terminal. Using ``printf()`` is the recommended approach for functional debugging. With it, the programmer can check the value of inputs, outputs, intermediate values, and control flow within their operator.
Usage
^^^^^
To use ``printf()`` within a Custom C++ operator, the programmer must set the following environment variables before running their model in order to receive the messages printed by their operator:
.. list-table:: Environment Variables
:widths: 50 200 20 200 200
:header-rows: 1
* - Name
- Description
- Type
- Value to Enable printf
- Default Value
* - ``NEURON_RT_LOG_LEVEL``
- Runtime log verbose level
- String
- At least ``INFO``
- See (https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-runtime/nrt-configurable-parameters.html?highlight=NEURON_RT_LOG_LEVEL#neuron-runtime-configuration) for more options.
* - ``NEURON_RT_GPSIMD_STDOUT_QUEUE_SIZE_BYTES``
- Size of the printf output buffer, in bytes
- Integer
- Any power of two that is equal to or less than ``2097152`` (2MB)
- Recommend setting a value of ``2097152`` to maximize the size of printf's buffer. Setting a value of 0 disables printf.
Within a Custom C++ operator, ``printf()`` can be used as normal from within a C++ program. For more information, consult a reference such as (https://cplusplus.com/reference/cstdio/printf/)
Example
^^^^^^^
.. code-block:: c++
#include <torch/torch.h>
#include <stdio.h> // Contains printf()
torch::Tensor tensor_negate_compute(const torch::Tensor& t_in) {
size_t num_elem = t_in.numel();
torch::Tensor t_out = torch::zeros({num_elem}, torch::kFloat);
auto t_in_acc = t_in.accessor<float, 1>();
auto t_out_acc = t_out.accessor<float, 1>();
for (size_t i = 0; i < num_elem; i++) {
float tmp = -1 * t_in_acc[i];
printf("Assigning element %d to a value of %f\n", i, tmp);
t_out_acc[i] = tmp;
}
return t_out;
}
Print statements then appear on the host's terminal with a header message prepended:
::
2023-Jan-26 00:25:02.0183 4057:4131 INFO TDRV:pool_stdio_queue_consume_all_entries Printing stdout from GPSIMD:
Setting element 0 to value -1.000000
Setting element 1 to value -2.000000
Setting element 2 to value -3.000000
Setting element 3 to value -4.000000
Setting element 4 to value -5.000000
Setting element 5 to value -6.000000
Setting element 6 to value -7.000000
Setting element 7 to value -8.000000
Limitations
^^^^^^^^^^^
* Performance: using ``printf()`` significantly degrades the operator's performance
* The programmer can disable it by unsetting ``NEURON_RT_GPSIMD_STDOUT_QUEUE_SIZE_BYTES`` or setting it to 0
* Disabling ``printf()`` is recommended if running the model in a performance-sensitive context
* To maximize performance, the programmer should remove calls to ``printf()`` from within the operator (or compile them out, as in the sketch after this list)
* Even if disabled, calling the function incurs overhead
* Buffer size: output from ``printf()`` is buffered during model execution and read by the Neuron runtime after execution
* The model can still execute successfully if the programmer overflows the buffer
* Overflowing the buffer will cause the oldest data in it to be overwritten
* Print statements are processed and printed to the host's terminal at the end of model execution, not in real time
* ``printf()`` is supported only in single-core mode; when using multiple GPSIMD cores, it is supported only on GPSIMD core 0.
* When using multiple GPSIMD cores, only ``TensorTcmAccessor`` is supported. Usage of other accessors will result in undefined behaviour.
* When using multiple GPSIMD cores, only one custom operator per model is currently supported.
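One way to act on the performance notes above is to compile ``printf()`` calls out entirely when they are not needed. The sketch below uses a hypothetical ``OP_DEBUG_PRINT`` macro that the operator author would define only for debug builds (for example with ``-DOP_DEBUG_PRINT``); the macro is illustrative and not part of the Neuron API.

.. code-block:: c++

    #include <torch/torch.h>
    #include <stdio.h>  // printf()

    // OP_DEBUG_PRINT is a hypothetical, user-defined build flag: when it is not
    // defined, OP_LOG expands to nothing and no printf() overhead remains.
    #ifdef OP_DEBUG_PRINT
    #define OP_LOG(...) printf(__VA_ARGS__)
    #else
    #define OP_LOG(...) ((void)0)
    #endif

    torch::Tensor tensor_negate_compute(const torch::Tensor& t_in) {
        size_t num_elem = t_in.numel();
        torch::Tensor t_out = torch::zeros({static_cast<int64_t>(num_elem)}, torch::kFloat);
        auto t_in_acc = t_in.accessor<float, 1>();
        auto t_out_acc = t_out.accessor<float, 1>();
        for (size_t i = 0; i < num_elem; i++) {
            t_out_acc[i] = -t_in_acc[i];
            OP_LOG("Setting element %zu to value %f\n", i, t_out_acc[i]);
        }
        return t_out;
    }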
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _custom-ops-api-ref-guide:
Custom Operators API Reference Guide [Experimental]
===================================================
This page provides the documentation for the C++ API available to creators of Neuron custom C++ operators (see :ref:`neuron_c++customops`).
.. contents:: Table of contents
:local:
:depth: 1
Tensor Library
--------------
The tensor library used for Neuron custom C++ operators is based upon the PyTorch ATen tensor library. This includes the core Tensor class as well as select operations defined below. Users need to include the ``<torch/torch.h>`` header to access the tensor library. A small example of using the tensor library looks as follows.
.. code-block:: c++
#include <torch/torch.h>
...
torch::Tensor a = torch::zeros({32, 32, 3}, torch::kFloat);
Tensor Factory Functions
^^^^^^^^^^^^^^^^^^^^^^^^
The tensor factory functions provide different means for creating new tensors.
They each take in a ``size`` argument that specifies the size of each dimension of the tensor created (with the exception of ``eye``, which takes in two int64's and creates a strictly 2-dimensional identity matrix.)
``c10::TensorOptions`` allows the specification of optional properties for the tensor being created. Currently, only the ``dtype`` property has an effect on tensor construction, and it must be specified. Other properties, such as ``layout`` may be supported in the future.
The example above shows a common way to use factory functions.
The following dtypes are supported:
* torch::kFloat
* torch::kBFloat16
* torch::kHalf
* torch::kInt
* torch::kChar
* torch::kLong
* torch::kShort
* torch::kByte
.. cpp:function:: torch::Tensor empty(torch::IntArrayRef size, c10::TensorOptions options)
Creates a tensor filled with uninitialized data, with the specified size and options. Slightly faster than other factory functions since it skips writing data to the tensor.
.. cpp:function:: torch::Tensor full(torch::IntArrayRef size, const Scalar & fill_value, c10::TensorOptions options)
Creates a tensor filled with the specified ``fill_value``, with the specified size and options.
.. cpp:function:: torch::Tensor zeros(torch::IntArrayRef size, c10::TensorOptions options)
Creates a tensor filled with zeros, with the specified size and options.
.. cpp:function:: torch::Tensor ones(torch::IntArrayRef size, c10::TensorOptions options)
Creates a tensor filled with ones, with the specified size and options.
.. cpp:function:: torch::Tensor eye(int64_t n, int64_t m, c10::TensorOptions options)
Creates a 2-D tensor with ones on the diagonal and zeros elsewhere.
Tensor Operation Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^
The tensor library provides commonly used operations defined below. The tensor operation functions do not support broadcasting; the shape of the operands must match if applicable.
The library provides two styles of functions for each tensor operation. For functions ending with ``_out``, a tensor with the proper size must be provided to which the output is written. This is illustrated in the example below.
.. code-block:: c++
torch::exp_out(t_out, t_in);
Alternatively, for functions that do not end in ``_out``, a new tensor that contains the results of the operation is allocated and returned as seen in the example below.
.. code-block:: c++
torch::Tensor t_out = torch::exp(t_in);
.. warning::
Only operations that are documented below are supported.
.. cpp:function:: torch::Tensor& abs_out(torch::Tensor &result, torch::Tensor &self)
.. cpp:function:: torch::Tensor abs(torch::Tensor& self)
Computes the absolute value of each element in ``self``.
.. cpp:function:: torch::Tensor& ceil_out(torch::Tensor &result, torch::Tensor &self)
.. cpp:function:: torch::Tensor ceil(torch::Tensor &self)
Computes the ceiling of the elements of ``self``, the smallest integer greater than or equal to each element.
.. cpp:function:: torch::Tensor& floor_out(torch::Tensor& result, torch::Tensor &self)
.. cpp:function:: torch::Tensor floor(torch::Tensor &self)
Computes the floor of the elements of ``self``, the largest integer less than or equal to each element.
.. cpp:function:: torch::Tensor& sin_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor sin(torch::Tensor& self)
Computes the sine value of the elements of ``self``.
.. cpp:function:: torch::Tensor& cos_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor cos(torch::Tensor& self)
Computes the cosine value of the elements of ``self``.
.. cpp:function:: torch::Tensor& tan_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor tan(torch::Tensor& self)
Computes the tangent value of the elements of ``self``.
.. cpp:function:: torch::Tensor& log_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor log(torch::Tensor& self)
Computes the natural logarithm of the elements of ``self``.
.. cpp:function:: torch::Tensor& log2_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor log2(torch::Tensor& self)
Computes the base-2 logarithm of the elements of ``self``.
.. cpp:function:: torch::Tensor& log10_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor log10(torch::Tensor& self)
Computes the base-10 logarithm of the elements of ``self``.
.. cpp:function:: torch::Tensor& exp_out(torch::Tensor& result, torch::Tensor& self)
.. cpp:function:: torch::Tensor exp(torch::Tensor& self)
Computes the exponential of the elements of ``self``.
.. cpp:function:: torch::Tensor& pow_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar & exponent)
.. cpp:function:: torch::Tensor& pow_out(torch::Tensor& result, const torch::Scalar& self, const torch::Tensor & exponent)
.. cpp:function:: torch::Tensor& pow_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor & exponent)
.. cpp:function:: torch::Tensor pow(const torch::Tensor& self, const torch::Scalar & exponent)
.. cpp:function:: torch::Tensor pow(const torch::Scalar& self, const torch::Tensor & exponent)
.. cpp:function:: torch::Tensor pow(const torch::Tensor& self, const torch::Tensor & exponent)
Takes the power of each element in ``self`` with ``exponent``.
.. cpp:function:: torch::Tensor& clamp_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar& minval, const torch::Scalar& maxval)
.. cpp:function:: torch::Tensor clamp(const torch::Tensor& self, const torch::Scalar& minval, const torch::Scalar& maxval)
Clamps all elements in ``self`` into the range ``[minval, maxval]``.
.. cpp:function:: torch::Tensor& add_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar &other, const torch::Scalar& alpha=1)
.. cpp:function:: torch::Tensor& add_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor& other, const torch::Scalar& alpha=1)
.. cpp:function:: torch::Tensor add(const torch::Tensor& self, const torch::Scalar &other, const torch::Scalar& alpha=1)
.. cpp:function:: torch::Tensor add(const torch::Tensor& self, const torch::Tensor &other, const torch::Scalar& alpha=1)
Adds ``other``, scaled by ``alpha``, to ``input``,
.. math::
out = self + alpha \times other.
.. cpp:function:: torch::Tensor& sub_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar &other, const torch::Scalar& alpha=1)
.. cpp:function:: torch::Tensor& sub_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor& other, const torch::Scalar& alpha=1)
.. cpp:function:: torch::Tensor sub(const torch::Tensor& self, const torch::Tensor &other, const torch::Scalar& alpha=1)
.. cpp:function:: torch::Tensor sub(const torch::Tensor& self, const torch::Scalar& other, const torch::Scalar& alpha=1)
Subtracts ``other``, scaled by ``alpha``, to ``input``,
.. math::
out = self - alpha \times other.
.. cpp:function:: torch::Tensor& mul_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar &other)
.. cpp:function:: torch::Tensor& mul_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor mul(const torch::Tensor& self, const torch::Scalar &other)
.. cpp:function:: torch::Tensor mul(const torch::Tensor& self, const torch::Tensor &other)
Multiplies ``self`` by ``other``.
.. cpp:function:: torch::Tensor& div_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar &other)
.. cpp:function:: torch::Tensor& div_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor div(const torch::Tensor& self, const torch::Scalar &other)
.. cpp:function:: torch::Tensor div(const torch::Tensor& self, const torch::Tensor &other)
Divides ``self`` by ``other``.
.. note::
For tensor-tensor bitwise operations, all the bitwise operations are elementwise between two tensors. For scalar-tensor bitwise operations, the scalar is casted to the datatype of the tensor before computing the bitwise operation.
.. cpp:function:: torch::Tensor& bitwise_and_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor& bitwise_and_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar& other)
.. cpp:function:: torch::Tensor& bitwise_and_out(torch::Tensor& result, const torch::Scalar& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor bitwise_and(const torch::Tensor& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor bitwise_and(const torch::Tensor& self, const torch::Scalar& other)
.. cpp:function:: torch::Tensor bitwise_and(const torch::Scalar& self, const torch::Tensor& other)
Computes the bitwise AND of ``self`` and ``other``. The input tensors must be of integral types.
.. cpp:function:: torch::Tensor& bitwise_or_out(torch::Tensor& result, const torch::Tensor& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor& bitwise_or_out(torch::Tensor& result, const torch::Tensor& self, const torch::Scalar& other)
.. cpp:function:: torch::Tensor& bitwise_or_out(torch::Tensor& result, const torch::Scalar& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor bitwise_or(const torch::Tensor& self, const torch::Tensor& other)
.. cpp:function:: torch::Tensor bitwise_or(const torch::Tensor& self, const torch::Scalar& other)
.. cpp:function:: torch::Tensor bitwise_or(const torch::Scalar& self, const torch::Tensor& other)
Computes the bitwise OR of ``self`` and ``other``. The input tensors must be of integral types.
.. cpp:function:: torch::Tensor& bitwise_not_out(torch::Tensor& result, const torch::Tensor& self)
.. cpp:function:: torch::Tensor bitwise_not(torch::Tensor& result, const torch::Tensor& self)
Computes the bitwise NOT of ``self``. The input tensor must be of integral types.
Class torch::Tensor
^^^^^^^^^^^^^^^^^^^
Constructors
""""""""""""
Users should not call the Tensor constructor directly but instead use one of the Tensor factory functions.
Member Functions
""""""""""""""""
.. cpp:function:: template<typename T, size_t N> TensorAccessor<T,N,true> accessor() const&
Return a ``TensorAccessor`` for element-wise random access of a Tensor's elements. Scalar type and dimension template parameters must be specified. This const-qualified overload returns a read-only ``TensorAccessor``, preventing the user from writing to Tensor elements. See the Tensor Accessors section below for more details.
.. cpp:function:: template<typename T, size_t N> TensorAccessor<T,N,false> accessor() &
Return a ``TensorAccessor`` for element-wise random access of a Tensor's elements. Scalar type and dimension template parameters must be specified. This non-const-qualified overload returns a ``TensorAccessor`` that can be used to both read and write to Tensor elements. See the Tensor Accessors section below for more details.
.. cpp:function:: template<typename T> TensorReadStreamAccessor<T> read_stream_accessor() const&
Opens a streaming accessor for read on a tensor. Template parameter ``T`` is the scalar type of the tensor data. See Streaming Accessors section below for more details.
.. cpp:function:: template<typename T> TensorWriteStreamAccessor<T> write_stream_accessor() &
Opens a streaming accessor for write on a tensor. Template parameter ``T`` is the scalar type of the tensor data. See Streaming Accessors section below for more details.
.. cpp:function:: CoherencyEnforcer::Policy get_accessor_coherence_policy() const
Get the Tensor accessor coherence policy. See Coherence section below for more details.
.. cpp:function:: void set_accessor_coherence_policy(CoherencyEnforcer::Policy policy) const
Set the Tensor accessor coherence policy. See Coherence section below for more details.
.. cpp:function:: TensorTcmAccessor<true> tcm_accessor() const&
Opens a TCM accessor on a tensor. This const-qualified overload returns a read-only ``TensorTcmAccessor``, preventing the user from writing to Tensor elements. See TCM Accessor section below for more details.
.. cpp:function:: TensorTcmAccessor<false> tcm_accessor() &
Opens a TCM accessor on a tensor. This non-const-qualified overload returns a ``TensorTcmAccessor`` that can be used to both read and write to Tensor elements. See TCM Accessor section below for more details.
.. cpp:function:: torch::Tensor& fill_(const torch::Scalar & value) const
Fill a tensor with the specified value.
Tensor Operators
""""""""""""""""
.. cpp:function:: Tensor& operator=(const Tensor &x) &
.. cpp:function:: Tensor& operator=(Tensor &&x) &
Assignment operators
Tensor Accessors
----------------
The standard tensor accessor provides element-wise random access to ``Tensor`` elements. They can be created by calling ``Tensor::accessor()``. It can be used similarly to the Pytorch ATen version (see https://pytorch.org/cppdocs/notes/tensor_basics.html#cpu-accessors). However, it is not as fast as other methods of accessing a ``Tensor``, such as the streaming accessor or TCM accessor.
Example Usage
^^^^^^^^^^^^^
Element-wise add of two 1D tensors using ``TensorAccessor``.
.. code-block:: c++
torch::Tensor tensor_add_compute(const torch::Tensor& t1, const torch::Tensor& t2) {
size_t num_elem = t1.numel();
assert(t1.sizes() == t2.sizes());
torch::Tensor t_out = torch::empty({num_elem}, torch::kFloat);
auto t1_acc = t1.accessor<float, 1>();
auto t2_acc = t2.accessor<float, 1>();
auto t_out_acc = t_out.accessor<float, 1>();
for (size_t i = 0; i < num_elem; i++) {
t_out_acc[i] = t1_acc[i] + t2_acc[i];
}
return t_out;
}
.. _custom-ops-ref-guide-mem-arch:
Memory Architecture
^^^^^^^^^^^^^^^^^^^
Tensor data is stored in NeuronCore memory. The various types of accessors enable users to access tensor data from their custom C++ operator code running on the GPSIMD engine.
.. image:: /neuron-customops/images/ncorev2_gpsimd_memory.png
:width: 600
Streaming Accessors
-------------------
Streaming accessors allow the user to access ``Tensor`` elements in sequential order, faster than the standard tensor accessor. There are two stream accessor classes, one for reading and one for writing. Users should not construct stream accessors directly, but should get them from a ``Tensor`` using ``Tensor::read_stream_accessor()`` and ``Tensor::write_stream_accessor()``.
An active stream accessor is defined as a stream accessor that has been instantiated and not yet closed (via the ``close()`` method or by going out-of-scope).
The user is responsible for managing stream accessors concurrently accessing the same ``Tensor``. For safest usage, no other stream accessor should be active while there is an active ``TensorWriteStreamAccessor`` on the same ``Tensor``. The user may either have multiple ``TensorReadStreamAccessors`` active on the same ``Tensor``, or only have a single ``TensorWriteStreamAccessor`` active on that ``Tensor``. Stream accessors should not be used concurrently with standard tensor accessors on the same ``Tensor``.
An unlimited number of active stream accessors (in total, across all ``Tensors``) are functionally supported, but only up to 4 active stream accessors will be performant. Additional stream accessors beyond the 4th will have performance similar to that of a standard tensor accessor.
Example Usage
^^^^^^^^^^^^^
Element-wise add of two tensors using ``TensorReadStreamAccessor`` and ``TensorWriteStreamAccessor``.
.. code-block:: c++
torch::Tensor tensor_add_compute(const torch::Tensor& t1, const torch::Tensor& t2) {
assert(t1.sizes() == t2.sizes());
torch::Tensor t_out = torch::empty(t1.sizes(), torch::kFloat);
auto t1_rd_stm_acc = t1.read_stream_accessor<float>();
auto t2_rd_stm_acc = t2.read_stream_accessor<float>();
auto t_out_wr_stm_acc = t_out.write_stream_accessor<float>();
for (int i = 0; i < t1.numel(); i++) {
auto sum = t1_rd_stm_acc.read() + t2_rd_stm_acc.read();
t_out_wr_stm_acc.write(sum);
}
return t_out;
}
Class torch::TensorReadStreamAccessor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. cpp:class:: template<typename T> class torch::TensorReadStreamAccessor
The class template parameter ``T`` is the scalar type of the tensor data.
Member Functions
""""""""""""""""
.. cpp:function:: T read()
Reads the next element in the stream. The user is responsible for knowing when to stop reading from a ``TensorReadStreamAccessor``. Reading past the end of the stream or on a closed stream results in undefined behaviour.
.. cpp:function:: int close()
Closes stream. Do not read from the stream after calling ``close()``.
Class torch::TensorWriteStreamAccessor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. cpp:class:: template<typename T> class torch::TensorWriteStreamAccessor
The class template parameter ``T`` is the scalar type of the tensor data.
Member Functions
""""""""""""""""
.. cpp:function:: void write(T value)
Writes to the next element in the stream. The written value is not guaranteed to be written back to the Tensor's memory until the ``TensorWriteStreamAccessor`` goes out of scope, or the user explicitly calls ``close()``. The user is responsible for knowing when to stop writing to a stream accessor. Writing past the end of the stream or on a closed stream results in undefined behaviour.
.. cpp:function:: int close()
Closes stream. Flushes write data to the ``Tensor``'s memory. Do not write to the stream after calling ``close()``.
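As a hedged illustration of the flush-on-close behaviour described above (the function name and values are placeholders), the following sketch writes every element of a tensor through a ``TensorWriteStreamAccessor`` and then calls ``close()`` explicitly so the data is flushed back to the ``Tensor``'s memory before anything else reads it:
.. code-block:: c++
    void write_ramp(torch::Tensor& t_out) {
        auto wr_acc = t_out.write_stream_accessor<float>();
        for (int i = 0; i < t_out.numel(); i++) {
            // Each write() stores into the next element of the stream
            wr_acc.write(static_cast<float>(i));
        }
        // Explicit close() flushes the buffered writes to the Tensor's memory;
        // do not write to the stream after this point
        wr_acc.close();
    }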
Coherence
^^^^^^^^^
Stream accessors cache ``Tensor`` data in GPSIMD tightly-coupled memory (TCM), but do not ensure their caches remain coherent. When exactly they read from or write back to NeuronCore memory is opaque to the user (except for ``close()`` which forces a write back).
The safest way to use them is to ensure that no other stream accessor is active (instantiated and not yet closed) while there is an active write stream accessor on the same ``Tensor``. The user should either have multiple read stream accessors active on the same ``Tensor``, or only have a single write stream accessor active on that ``Tensor``.
The standard tensor accessors read/write NeuronCore memory directly. Therefore, tensor accessors can safely concurrently access the same ``Tensor``, but it is safest not to use them concurrently with stream accessors since NeuronCore memory isn't guaranteed to be coherent with the stream accessor caches.
These coarse-grained guidelines are best practices, but it is possible to ignore them with careful usage of the accessors (making sure elements are read before they are written to, elements written to are written back before being read again, etc).
The coherence policy of a ``Tensor`` determines what to do when there is potentially incoherent access by an accessor of that ``Tensor``. It can either cause an error, or allow it but print a warning, or do nothing. In the case of the latter two options, it is the user's responsibility to ensure they carefully use accessors coherently. Coherence policy for ``Tensors`` is ``torch::CoherencyEnforcer::Policy::COHERENT`` by default, but can be changed using ``Tensor::set_accessor_coherence_policy()``.
.. code-block:: c++
// class torch::CoherencyEnforcer
enum Policy {
// Enforce a resource is acquired in a way that guarantees coherence
// Causes an error if it encounters potentially incoherent access
COHERENT,
// Allows potentially incoherent access, but will print a warning
INCOHERENT_VERBOSE,
// Allows potentially incoherent access, no error or warnings
INCOHERENT_QUIET
};
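As a minimal sketch (assuming the surrounding kernel really does sequence its accessors safely), the coherence policy of a ``Tensor`` can be relaxed with the setter documented above; ensuring coherent usage then becomes the user's responsibility:
.. code-block:: c++
    void relax_coherence_checks(torch::Tensor& t) {
        // Default is COHERENT, which errors on potentially incoherent access.
        // INCOHERENT_VERBOSE allows such access but prints a warning;
        // INCOHERENT_QUIET allows it silently.
        t.set_accessor_coherence_policy(torch::CoherencyEnforcer::Policy::INCOHERENT_VERBOSE);
    }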
TCM Accessor
------------
TCM accessors provide the fastest read and write performance. They let the user manually manage copying data between the larger but slower-access NeuronCore memory and the faster GPSIMD tightly-coupled memory (TCM); see the diagram under :ref:`custom-ops-ref-guide-mem-arch`. Create a ``TensorTcmAccessor`` from a ``Tensor`` by calling ``Tensor::tcm_accessor()``. Users can allocate and free TCM memory using ``tcm_malloc()`` and ``tcm_free()``, and have access to a 16KB pool of TCM memory. Note that the streaming accessors also allocate from this pool (4KB each). TCM accessors do not perform any coherence checks.
.. note::
See :ref:`neuronx-customop-mlp-perf` for a tutorial on how to use TCM accessors.
Example Usage
^^^^^^^^^^^^^
Element-wise negate of a tensor using ``TensorTcmAccessor``.
.. code-block:: c++
torch::Tensor tensor_negate_compute(const torch::Tensor& t_in) {
size_t num_elem = t_in.numel();
torch::Tensor t_out = torch::empty(t_in.sizes(), torch::kFloat);
static constexpr size_t buffer_size = 1024;
float *tcm_buffer = (float *)torch::neuron::tcm_malloc(sizeof(float) * buffer_size);
if (tcm_buffer != nullptr) {
// tcm_malloc allocated successfully, use TensorTcmAccessor
auto t_in_tcm_acc = t_in.tcm_accessor();
auto t_out_tcm_acc = t_out.tcm_accessor();
for (size_t i = 0; i < num_elem; i += buffer_size) {
size_t remaining_elem = num_elem - i;
size_t copy_size = (remaining_elem > buffer_size) ? buffer_size : remaining_elem;
t_in_tcm_acc.tensor_to_tcm<float>(tcm_buffer, i, copy_size);
for (size_t j = 0; j < copy_size; j++) {
tcm_buffer[j] *= -1;
}
t_out_tcm_acc.tcm_to_tensor<float>(tcm_buffer, i, copy_size);
}
torch::neuron::tcm_free(tcm_buffer);
} else {
// Handle not enough memory...
}
return t_out;
}
TCM Management Functions
^^^^^^^^^^^^^^^^^^^^^^^^
.. cpp:function:: void * torch::neuron::tcm_malloc(size_t nbytes)
Allocate ``nbytes`` bytes of memory from TCM and return pointer to this memory. Upon failure, returns null.
.. cpp:function:: void torch::neuron::tcm_free(void * ptr)
Free memory that was allocated by ``tcm_malloc()``. Undefined behaviour if ``ptr`` was not returned from a previous call to ``tcm_malloc()``.
Class torch::TensorTcmAccessor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. cpp:class:: template<bool read_only> class torch::TensorTcmAccessor
The ``read_only`` template parameter controls whether or not you can write to the accessor's ``Tensor``. A ``const Tensor`` will return a read-only ``TensorTcmAccessor`` from ``Tensor::tcm_accessor()``.
Member Functions
""""""""""""""""
.. cpp:function:: template<typename T> void tensor_to_tcm(T * tcm_ptr, size_t tensor_offset, size_t num_elem)
Copy ``num_elem`` elements from the accessor's ``Tensor`` starting at the index ``tensor_offset`` to a TCM buffer starting at ``tcm_ptr``. Tensor indexing is performed as if the tensor was flattened. Template parameter ``T`` is the scalar type of the tensor data. The TCM buffer's size should be at least ``sizeof(T) * num_elem`` bytes.
.. cpp:function:: template<typename T> void tcm_to_tensor(T * tcm_ptr, size_t tensor_offset, size_t num_elem)
Copy ``num_elem`` elements from a TCM buffer starting at ``tcm_ptr`` to the accessor's ``Tensor`` starting at the index ``tensor_offset``. Tensor indexing is performed as if the tensor was flattened. The TCM buffer's size should be at least ``sizeof(T) * num_elem`` bytes.
Writing Directly to Output Tensor
---------------------------------
.. cpp:function:: torch::Tensor get_dst_tensor()
Returns a reference to the Custom C++ operator output tensor (return value). If this method is called, it is assumed that data will be written to this output tensor, and the tensor returned from the C++ operator will be ignored. Using this method will improve performance by avoiding additional copying of the return value. See example below for function usage.
.. code-block:: c++
:emphasize-lines: 4, 12
// Example of write to get_dst_tensor()
torch::Tensor example_kernel(const torch::Tensor& t_in) {
size_t num_elem = t_in.numel();
torch::Tensor t_out = get_dst_tensor();
auto t_out_tcm_acc = t_out.tcm_accessor();
float *tcm_buffer = (float *)torch::neuron::tcm_malloc(sizeof(float) * buffer_size);
// Populate tcm_buffer with results
...
// Write to t_out through the tcm_accessor
t_out_tcm_acc.tcm_to_tensor<float>(tcm_buffer, offset, copy_size);
...
}
Using multiple GPSIMD cores
---------------------------
.. note::
See :ref:`neuronx-customop-mlp-perf` for a tutorial on how to use multiple GPSIMD cores to execute the Custom C++ Operator
By default, Custom C++ operators target a single core of the GPSIMD-Engine. Performance of Custom C++ operators can be improved by targeting multiple cores. To enable usage of multiple GPSIMD cores, ``multicore=True`` should be passed to ``custom_op.load()``.
.. code-block:: python
:emphasize-lines: 6
custom_op.load(
name=name,
compute_srcs=compute_srcs,
shape_srcs=shape_srcs,
build_directory=os.getcwd(),
multicore=True
)
Each GPSIMD core executes the same kernel function. The user can control the execution on each core by conditioning the Custom C++ operator logic on the core id (obtained via ``get_cpu_id()`` API). This is illustrated in the example below.
The following functions are defined in ``neuron/neuron-utils.hpp``
.. cpp:function:: uint32_t get_cpu_id()
Return the id of the core that the Custom C++ operator is executing on; the id is in the range ``[0, get_cpu_count())``
.. cpp:function:: uint32_t get_cpu_count()
Return the total number of available GPSIMD cores.
.. code-block:: c++
:emphasize-lines: 5, 6, 15
torch::Tensor example_kernel(const torch::Tensor& t_in) {
size_t num_elem = t_in.numel();
torch::Tensor t_out = get_dst_tensor();
uint32_t cpu_id = get_cpu_id();
uint32_t cpu_count = get_cpu_count();
uint32_t partition = num_elem / cpu_count;
float *tcm_buffer = (float *)torch::neuron::tcm_malloc(sizeof(float) * buffer_size);
// Populate tcm_buffer with desired results
...
// Write to t_out with an offset computed from cpu_id and cpu_count
t_out_tcm_acc.tcm_to_tensor<float>(tcm_buffer, partition*cpu_id, copy_size);
...
}
Return Value Handling
^^^^^^^^^^^^^^^^^^^^^
When using multiple GPSIMD cores, the ``get_dst_tensor()`` API must be used to write the return value of the Custom C++ operator. Not writing data to the tensor reference returned by ``get_dst_tensor()``, or not invoking ``get_dst_tensor()`` at all, results in undefined behavior. The user is responsible for writing the appropriate portion of the output reference tensor from a given GPSIMD core. Since there is no synchronization between GPSIMD cores, it is advised that each GPSIMD core writes to a mutually exclusive partition of the output reference tensor, as illustrated in the sketch below.
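The following is a minimal, self-contained sketch of that pattern (assuming a flat float tensor; the kernel name, buffer size, and remainder handling are illustrative choices, not prescribed by the API). Each core copies, negates, and writes back only its own slice of the output obtained from ``get_dst_tensor()``, using ``TensorTcmAccessor`` since that is the only accessor supported in multicore mode:
.. code-block:: c++
    #include <torch/torch.h>
    #include <neuron/neuron-utils.hpp> // get_cpu_id(), get_cpu_count()
    torch::Tensor multicore_negate_kernel(const torch::Tensor& t_in) {
        size_t num_elem = t_in.numel();
        torch::Tensor t_out = get_dst_tensor();
        uint32_t cpu_id = get_cpu_id();
        uint32_t cpu_count = get_cpu_count();
        // Mutually exclusive, contiguous slice for this core; the last core
        // also takes the remainder when num_elem is not divisible by cpu_count
        size_t partition = num_elem / cpu_count;
        size_t begin = partition * cpu_id;
        size_t end = (cpu_id == cpu_count - 1) ? num_elem : begin + partition;
        static constexpr size_t buffer_size = 1024;
        float *tcm_buffer = (float *)torch::neuron::tcm_malloc(sizeof(float) * buffer_size);
        if (tcm_buffer != nullptr) {
            auto t_in_tcm_acc = t_in.tcm_accessor();
            auto t_out_tcm_acc = t_out.tcm_accessor();
            for (size_t i = begin; i < end; i += buffer_size) {
                size_t remaining_elem = end - i;
                size_t copy_size = (remaining_elem > buffer_size) ? buffer_size : remaining_elem;
                t_in_tcm_acc.tensor_to_tcm<float>(tcm_buffer, i, copy_size);
                for (size_t j = 0; j < copy_size; j++) {
                    tcm_buffer[j] *= -1;
                }
                t_out_tcm_acc.tcm_to_tensor<float>(tcm_buffer, i, copy_size);
            }
            torch::neuron::tcm_free(tcm_buffer);
        } else {
            // Handle not enough TCM memory...
        }
        return t_out;
    }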
printf()
--------------
Custom C++ operators support the use of C++'s ``printf()`` to send information to the host's terminal. Using ``printf()`` is the recommended approach to functional debug. With it, the programmer can check the value of inputs, outputs, intermediate values, and control flow within their operator.
Usage
^^^^^
To use ``printf()`` within a Custom C++ operator, the programmer must set the following environment variables before running their model in order to receive the messages printed by their operator:
.. list-table:: Environment Variables
:widths: 50 200 20 200 200
:header-rows: 1
* - Name
- Description
- Type
- Value to Enable printf
- Default Value
* - ``NEURON_RT_LOG_LEVEL``
- Runtime log verbose level
- String
- At least ``INFO``
- See (https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-runtime/nrt-configurable-parameters.html?highlight=NEURON_RT_LOG_LEVEL#neuron-runtime-configuration) for more options.
* - ``NEURON_RT_GPSIMD_STDOUT_QUEUE_SIZE_BYTES``
- Size of the printf output buffer, in bytes
- Integer
- Any power of two that is equal to or less than ``2097152`` (2MB)
- Recommend setting a value of ``2097152`` to maximize the size of printf's buffer. Setting a value of 0 disables printf.
Within a Custom C++ operator, ``printf()`` can be used as normal from within a C++ program. For more information, consult a reference such as (https://cplusplus.com/reference/cstdio/printf/)
Example
^^^^^^^
.. code-block:: c++
#include <torch/torch.h>
#include <stdio.h> // Contains printf()
torch::Tensor tensor_negate_compute(const torch::Tensor& t_in) {
size_t num_elem = t_in.numel();
torch::Tensor t_out = torch::zeros({num_elem}, torch::kFloat);
auto t_in_acc = t_in.accessor<float, 1>();
auto t_out_acc = t_out.accessor<float, 1>();
for (size_t i = 0; i < num_elem; i++) {
float tmp = -1 * t_in_acc[i];
printf("Assigning element %d to a value of %f\n", i, tmp);
t_out_acc[i] = tmp;
}
return t_out;
}
Print statements then appear on the host's terminal with a header message prepended:
::
2023-Jan-26 00:25:02.0183 4057:4131 INFO TDRV:pool_stdio_queue_consume_all_entries Printing stdout from GPSIMD:
Setting element 0 to value -1.000000
Setting element 1 to value -2.000000
Setting element 2 to value -3.000000
Setting element 3 to value -4.000000
Setting element 4 to value -5.000000
Setting element 5 to value -6.000000
Setting element 6 to value -7.000000
Setting element 7 to value -8.000000
Limitations
^^^^^^^^^^^
* Performance: using ``printf()`` significantly degrades the operator's performance
* The programmer can disable it by unsetting ``NEURON_RT_GPSIMD_STDOUT_QUEUE_SIZE_BYTES`` or setting it to 0
* Disabling ``printf()`` is recommended if running the model in a performance-sensitive context
* To maximize performance, the programmer should remove calls to ``printf()`` from within the operator
* Even if disabled, calling the function incurs overhead
* Buffer size: output from ``printf()`` is buffered during model execution and read by the Neuron runtime after execution
* The model can still execute successfully if the programmer overflows the buffer
* Overflowing the buffer will cause the oldest data in it to be overwritten
* Print statements are processed and printed to the host's terminal at the end of model execution, not in real time
* ``printf`` is only supported in single core mode; when using multiple GPSIMD cores, it is supported on GPSIMD core 0 only.
* When using multiple GPSIMD cores, only ``TensorTcmAccessor`` is supported. Usage of other accessors will result in undefined behaviour.
* When using multiple GPSIMD cores, only one custom operator per model is currently supported.
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuron-sys-tools/neuron-top-user-guide.rst.txt
```
.. _neuron-top-ug:
Neuron Top User Guide
=====================
.. contents:: Table of contents
:local:
:depth: 2
Overview
--------
``neuron-top`` provides useful information about NeuronCore and vCPU utilization, memory usage,
loaded models, and Neuron applications.
.. note::
``neuron-top`` fully supports the newly launched inf2 instances.
.. note::
If you are parsing ``neuron-top`` output in your automation environment, you can now replace it with ``neuron-monitor``
(:ref:`neuron-monitor-ug`) which outputs data in a standardized, easier to parse JSON format.
Using neuron-top
----------------
Command line arguments
~~~~~~~~~~~~~~~~~~~~~~
Launch ``neuron-top`` by simply typing its name in the shell: ``neuron-top``.
User interface
~~~~~~~~~~~~~~
The title section of the user interface shows the application's version number,
EC2 instance ID, and the instance type on which it is running:
|titleimg|
The rest of the user interface is divided into four sections. The data shown in these
sections applies to the currently selected tab - which can be the 'all' tab,
which aggregates data from all running Neuron processes, or a tab representing
a single Neuron process:
|overview|
* The ``NeuronCore <vers> Utilization`` section shows the NeuronCore utilization for the
currently selected tab. ``<vers>`` is the version of the NeuronCores on the instance (for example,
``v2`` for trn1 instances and inf2 instances).
Pressing the 'F' key will toggle between displaying utilization percentages - as seen in the previous image -
and teraflops (trillion floating point operations per second), as seen in the image below:
|flops|
* The ``VCPU Utilization`` section shows:
* ``System vCPU usage`` - the two percentages are user% and system%
* ``Runtime vCPU usage`` - same breakdown
.. _neuron_top_mem_usage:
* The ``Memory Usage Summary`` section provides a breakdown of the total memory usage on the Neuron Device as well
as on the host:
.. _neuron_top_host_mem_usage:
* ``Host Used Memory`` - amount of host memory used by the selected application (or an aggregate of all applications if 'All' is selected)
* ``Total`` - total amount of host memory used
* ``Tensors`` - amount of host memory used for tensors
* ``Constants`` - amount of host memory used for constants (for applications running training) or weights (for applications running inferences)
* ``DMA Buffers`` - amount of host memory used for DMA transfers
* ``App. Memory`` - amount of host memory used by the application that doesn't fall in any of the previous categories
.. _neuron_top_device_mem_usage:
* ``Device Used Memory`` - amount of device memory used by the selected application (or an aggregate of all applications if 'All' is selected)
* ``Total`` - total amount of device memory used
* ``Tensors`` - amount of device memory used for tensors
* ``Constants`` - amount of device memory used for constants (for applications running training) or weights (for applications running inferences)
* ``Model Code`` - amount of device memory used for storing model executable code
* ``Runtime Memory`` - amount of device memory used by the Neuron Runtime (outside of the previous categories)
* ``Model Scratchpad`` - amount of device memory used for the shared model scratchpad, a shared buffer used for internal model variables and other
auxiliary buffers
* ``Memory Usage Details`` contains memory usage data organized as a tree which can be expanded/collapsed. The columns are:
* ``Model ID`` - the Neuron Runtime identifier for this model instance
* ``Host Memory`` - amount of host memory used
* ``Device Memory`` - amount of device memory used
The tree view shows the amount of memory used for the same categories shown in the ``Memory Usage Summary`` but in this section
they are attached to either a model (if the memory has been allocated at model load time for that model), or to a NeuronCore (if
the memory can't be associated with a model, but has been allocated for that NeuronCore).
The 'parent' shows the total amount of memory used - the sum of its children.
.. note::
The up/down/left/right keys can be used to navigate the tree view. The 'x' key expands/collapses the
entire tree.
The bottom bar shows which Neuron process' data is currently displayed by highlighting
its tag using a green font and marking it using a pair of '>', '<' characters. The 'all'
tab shows an aggregated view of all the Neuron processes currently running on the instance.
|tabbar|
.. note::
The '1'-'9' keys select the current tab. 'a'/'d' selects the previous/next
tab on the bar.
.. |titleimg| image:: ../../images/nt-title.png
.. |overview| image:: ../../images/nt-1.png
.. |flops| image:: ../../images/nt-flops.png
.. |tabbar| image:: ../../images/nt-2.png
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_downloads/7060cd2442eb9c76edfb394bffdc9721/neuron-monitor-grafana.json
```
{
"annotations": {
"list": [
{
"builtIn": 1,
"datasource": "-- Grafana --",
"enable": true,
"hide": true,
"iconColor": "rgba(0, 211, 255, 1)",
"name": "Annotations & Alerts",
"type": "dashboard"
}
]
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"id": 2,
"iteration": 1605138719380,
"links": [],
"panels": [
{
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {
"align": null,
"filterable": false
},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
}
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "Value"
},
"properties": [
{
"id": "custom.width",
"value": 163
}
]
},
{
"matcher": {
"id": "byName",
"options": "Field"
},
"properties": [
{
"id": "custom.width",
"value": 450
}
]
},
{
"matcher": {
"id": "byName",
"options": "ami_id"
},
"properties": [
{
"id": "custom.width",
"value": 217
}
]
},
{
"matcher": {
"id": "byName",
"options": "instance_type"
},
"properties": [
{
"id": "custom.width",
"value": 391
}
]
},
{
"matcher": {
"id": "byName",
"options": "Prometheus instance"
},
"properties": [
{
"id": "custom.width",
"value": 641
}
]
}
]
},
"gridPos": {
"h": 8,
"w": 24,
"x": 0,
"y": 0
},
"id": 8,
"options": {
"showHeader": true,
"sortBy": []
},
"pluginVersion": "7.2.1",
"repeat": null,
"targets": [
{
"expr": "instance_info",
"format": "table",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Instance Info",
"transformations": [
{
"id": "organize",
"options": {
"excludeByName": {
"Time": true,
"Value": true,
"__name__": true,
"ami_id": false,
"instance": true,
"job": true
},
"indexByName": {
"Time": 0,
"Value": 7,
"__name__": 1,
"availability_zone": 8,
"instance": 5,
"instance_id": 2,
"instance_name": 3,
"instance_type": 4,
"job": 6,
"region": 9,
"subnet_id": 10
},
"renameByName": {
"Value": "",
"availability_zone": "Availability Zone",
"instance": "",
"instance_id": "Instance ID",
"instance_name": "Instance Name",
"instance_type": "Instance Type",
"region": "Region",
"subnet_id": "Subnet"
}
}
}
],
"type": "table"
},
{
"datasource": null,
"fieldConfig": {
"defaults": {
"custom": {},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "super-light-yellow",
"value": null
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 0,
"y": 8
},
"id": 36,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"last"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "7.2.1",
"targets": [
{
"expr": "count(instance_info)\n",
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Instance Count",
"type": "stat"
},
{
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "light-blue",
"value": null
}
]
},
"unit": "none"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 3,
"y": 8
},
"id": 10,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "center",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "7.2.1",
"targets": [
{
"expr": "sum (system_vcpu_count)",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "vCPU Count",
"type": "stat"
},
{
"datasource": null,
"fieldConfig": {
"defaults": {
"custom": {},
"mappings": [],
"thresholds": {
"mode": "percentage",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "#EAB839",
"value": 70
},
{
"color": "orange",
"value": 80
},
{
"color": "semi-dark-red",
"value": 90
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 6,
"y": 8
},
"id": 20,
"options": {
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"showThresholdLabels": true,
"showThresholdMarkers": true
},
"pluginVersion": "7.2.1",
"targets": [
{
"expr": "avg(sum by (instance_id) (system_vcpu_usage_ratio))",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "vCPU Utilization",
"type": "gauge"
},
{
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {},
"mappings": [],
"thresholds": {
"mode": "percentage",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "yellow",
"value": 70
},
{
"color": "orange",
"value": 80
},
{
"color": "red",
"value": 90
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 9,
"y": 8
},
"id": 16,
"options": {
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"showThresholdLabels": true,
"showThresholdMarkers": true
},
"pluginVersion": "7.2.1",
"targets": [
{
"expr": "avg(system_memory_used_bytes / system_memory_total_bytes)",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Host Memory Usage",
"type": "gauge"
},
{
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "rgb(191, 151, 105)",
"value": null
}
]
}
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 12,
"y": 8
},
"id": 12,
"options": {
"colorMode": "value",
"graphMode": "none",
"justifyMode": "center",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "7.2.1",
"targets": [
{
"expr": "count(neuroncore_utilization_ratio > 0)",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "NeuronCores in Use",
"transformations": [],
"type": "stat"
},
{
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {
"align": null,
"filterable": false
},
"mappings": [],
"thresholds": {
"mode": "percentage",
"steps": [
{
"color": "red",
"value": null
},
{
"color": "orange",
"value": 5
},
{
"color": "yellow",
"value": 20
},
{
"color": "green",
"value": 35
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 15,
"y": 8
},
"id": 4,
"interval": "",
"options": {
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"showThresholdLabels": true,
"showThresholdMarkers": true
},
"pluginVersion": "7.2.1",
"targets": [
{
"expr": "avg(neuroncore_utilization_ratio)",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "NeuronCore Utilization",
"type": "gauge"
},
{
"datasource": "Prometheus",
"description": "",
"fieldConfig": {
"defaults": {
"custom": {},
"mappings": [],
"thresholds": {
"mode": "percentage",
"steps": [
{
"color": "green",
"value": null
}
]
},
"unit": "cps"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 18,
"y": 8
},
"id": 6,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "7.2.1",
"targets": [
{
"expr": "sum(rate(execution_status_total{status_type=\"completed\"}[1m]))",
"hide": false,
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Execution Success Rate",
"transformations": [],
"type": "stat"
},
{
"datasource": "Prometheus",
"description": "",
"fieldConfig": {
"defaults": {
"custom": {},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 1
}
]
},
"unit": "cps"
},
"overrides": []
},
"gridPos": {
"h": 5,
"w": 3,
"x": 21,
"y": 8
},
"id": 18,
"options": {
"colorMode": "value",
"graphMode": "area",
"justifyMode": "auto",
"orientation": "auto",
"reduceOptions": {
"calcs": [
"mean"
],
"fields": "",
"values": false
},
"textMode": "auto"
},
"pluginVersion": "7.2.1",
"targets": [
{
"expr": "sum(rate(execution_status_total{status_type!=\"completed\"}[1m]))",
"instant": true,
"interval": "",
"legendFormat": "",
"refId": "A"
}
],
"timeFrom": null,
"timeShift": null,
"title": "Execution Error Rate",
"type": "stat"
},
{
"aliasColors": {
"Inf Error Rate": "semi-dark-red",
"Inf Success Rate": "light-green"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": null,
"fieldConfig": {
"defaults": {
"custom": {}
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 12,
"x": 0,
"y": 13
},
"hiddenSeries": false,
"id": 32,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "sum(rate(execution_status_total{status_type=\"completed\"}[1m]))",
"interval": "",
"legendFormat": "Execution Success Rate",
"refId": "A"
},
{
"expr": "sum(rate(execution_status_total{status_type!=\"completed\"}[1m]))",
"interval": "",
"legendFormat": "Execution Error Rate",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Execution Status Rates",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:547",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"$$hashKey": "object:548",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"p0": "dark-green",
"p1": "semi-dark-green",
"p100": "semi-dark-red",
"p25": "light-green",
"p50": "super-light-green",
"p75": "super-light-red",
"p99": "light-red",
"{percentile=\"p0\"}": "dark-green",
"{percentile=\"p1\"}": "semi-dark-green",
"{percentile=\"p100\"}": "dark-red",
"{percentile=\"p25\"}": "light-green",
"{percentile=\"p50\"}": "super-light-green",
"{percentile=\"p75\"}": "light-red",
"{percentile=\"p99\"}": "semi-dark-red"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": null,
"description": "",
"fieldConfig": {
"defaults": {
"custom": {},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "s"
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 12,
"x": 12,
"y": 13
},
"hiddenSeries": false,
"id": 34,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 1,
"points": true,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "avg by (percentile) (execution_latency_seconds)",
"interval": "",
"legendFormat": "{{percentile}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Execution Latency",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:61",
"format": "s",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"$$hashKey": "object:62",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": null,
"fieldConfig": {
"defaults": {
"custom": {},
"unit": "percentunit"
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 8,
"x": 0,
"y": 25
},
"hiddenSeries": false,
"id": 30,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "avg by (neuroncore) (neuroncore_utilization_ratio)",
"interval": "",
"legendFormat": "nc{{neuroncore}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "NeuronCore Utilization",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:493",
"format": "percentunit",
"label": null,
"logBase": 1,
"max": "1",
"min": "0",
"show": true
},
{
"$$hashKey": "object:494",
"format": "short",
"label": null,
"logBase": 1,
"max": "100",
"min": "0",
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"Runtime system CPU usage ": "light-red",
"Runtime user CPU usage ": "light-green"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 8,
"x": 8,
"y": 25
},
"hiddenSeries": false,
"id": 2,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "avg by (usage_type) (neuron_runtime_vcpu_usage_ratio)",
"format": "time_series",
"instant": false,
"interval": "",
"legendFormat": "Neuron Runtime {{usage_type}} CPU usage ",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Neuron Runtime vCPU Usage",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:385",
"format": "percentunit",
"label": null,
"logBase": 1,
"max": "1",
"min": "0",
"show": true
},
{
"$$hashKey": "object:386",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"host": "rgb(0, 217, 255)",
"neuron_device": "super-light-orange"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": null,
"fieldConfig": {
"defaults": {
"custom": {},
"unit": "bytes"
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 8,
"x": 16,
"y": 25
},
"hiddenSeries": false,
"id": 28,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "avg by (memory_location) (sum by (instance_id, memory_location) (neuron_runtime_memory_used_bytes))",
"interval": "",
"legendFormat": "{{memory_location}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Neuron Runtime Used Memory",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:439",
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"$$hashKey": "object:440",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"Memory Usage": "rgb(0, 217, 255)",
"NeuronCore Usage": "light-orange",
"vCPU Usage": "light-blue"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": null,
"fieldConfig": {
"defaults": {
"custom": {},
"unit": "percentunit"
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 8,
"x": 0,
"y": 37
},
"hiddenSeries": false,
"id": 22,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "avg(system_memory_used_bytes / system_memory_total_bytes)",
"instant": false,
"interval": "",
"legendFormat": "Memory Usage",
"refId": "A"
},
{
"expr": "avg(sum by (instance_id) (system_vcpu_usage_ratio))",
"instant": false,
"interval": "",
"legendFormat": "vCPU Usage",
"refId": "B"
},
{
"expr": "avg(neuroncore_utilization_ratio)",
"instant": false,
"interval": "",
"legendFormat": "NeuronCore Usage",
"refId": "C"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Host System Utilization",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:664",
"format": "percentunit",
"label": null,
"logBase": 1,
"max": "1",
"min": "0",
"show": true
},
{
"$$hashKey": "object:665",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"system": "light-red",
"user": "light-green"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {},
"unit": "percentunit"
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 8,
"x": 8,
"y": 37
},
"hiddenSeries": false,
"id": 24,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "avg by (usage_type) (system_vcpu_usage_ratio)",
"interval": "",
"legendFormat": "{{usage_type}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Host vCPU Usage",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:876",
"format": "percentunit",
"label": null,
"logBase": 1,
"max": "1",
"min": "0",
"show": true
},
{
"$$hashKey": "object:877",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"Memory Usage Bytes": "rgb(223, 180, 0)",
"Memory Usage Percent": "rgb(0, 217, 255)"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {},
"unit": "short"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "Memory Usage Percent"
},
"properties": [
{
"id": "unit",
"value": "percentunit"
}
]
},
{
"matcher": {
"id": "byName",
"options": "Memory Usage Bytes"
},
"properties": [
{
"id": "unit",
"value": "bytes"
}
]
}
]
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 8,
"x": 16,
"y": 37
},
"hiddenSeries": false,
"id": 26,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [
{
"$$hashKey": "object:711"
},
{
"$$hashKey": "object:931",
"alias": "Memory Usage Bytes",
"yaxis": 2
}
],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "avg(system_memory_used_bytes / system_memory_total_bytes)",
"instant": false,
"interval": "",
"legendFormat": "Memory Usage Percent",
"refId": "A"
},
{
"expr": "avg(system_memory_used_bytes)",
"instant": false,
"interval": "",
"legendFormat": "Memory Usage Bytes",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Host Memory Usage",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:689",
"format": "percentunit",
"label": "",
"logBase": 1,
"max": "1",
"min": "0",
"show": true
},
{
"$$hashKey": "object:690",
"decimals": null,
"format": "bytes",
"label": "",
"logBase": 1,
"max": null,
"min": "0",
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"refresh": "5s",
"schemaVersion": 26,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"datasource": "Prometheus",
"filters": [],
"hide": 0,
"label": "",
"name": "Filters",
"skipUrlSync": false,
"type": "adhoc"
}
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "neuron-monitor",
"uid": "EqWNYf5Mz",
"version": 68
}
```
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"$$hashKey": "object:62",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": null,
"fieldConfig": {
"defaults": {
"custom": {},
"unit": "percentunit"
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 8,
"x": 0,
"y": 25
},
"hiddenSeries": false,
"id": 30,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "avg by (neuroncore) (neuroncore_utilization_ratio)",
"interval": "",
"legendFormat": "nc{{neuroncore}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "NeuronCore Utilization",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:493",
"format": "percentunit",
"label": null,
"logBase": 1,
"max": "1",
"min": "0",
"show": true
},
{
"$$hashKey": "object:494",
"format": "short",
"label": null,
"logBase": 1,
"max": "100",
"min": "0",
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"Runtime system CPU usage ": "light-red",
"Runtime user CPU usage ": "light-green"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {},
"mappings": [],
"thresholds": {
"mode": "absolute",
"steps": [
{
"color": "green",
"value": null
},
{
"color": "red",
"value": 80
}
]
},
"unit": "percentunit"
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 8,
"x": 8,
"y": 25
},
"hiddenSeries": false,
"id": 2,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "avg by (usage_type) (neuron_runtime_vcpu_usage_ratio)",
"format": "time_series",
"instant": false,
"interval": "",
"legendFormat": "Neuron Runtime {{usage_type}} CPU usage ",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Neuron Runtime vCPU Usage",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:385",
"format": "percentunit",
"label": null,
"logBase": 1,
"max": "1",
"min": "0",
"show": true
},
{
"$$hashKey": "object:386",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"host": "rgb(0, 217, 255)",
"neuron_device": "super-light-orange"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": null,
"fieldConfig": {
"defaults": {
"custom": {},
"unit": "bytes"
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 8,
"x": 16,
"y": 25
},
"hiddenSeries": false,
"id": 28,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "avg by (memory_location) (sum by (instance_id, memory_location) (neuron_runtime_memory_used_bytes))",
"interval": "",
"legendFormat": "{{memory_location}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Neuron Runtime Used Memory",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:439",
"format": "bytes",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"$$hashKey": "object:440",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"Memory Usage": "rgb(0, 217, 255)",
"NeuronCore Usage": "light-orange",
"vCPU Usage": "light-blue"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": null,
"fieldConfig": {
"defaults": {
"custom": {},
"unit": "percentunit"
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 8,
"x": 0,
"y": 37
},
"hiddenSeries": false,
"id": 22,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "avg(system_memory_used_bytes / system_memory_total_bytes)",
"instant": false,
"interval": "",
"legendFormat": "Memory Usage",
"refId": "A"
},
{
"expr": "avg(sum by (instance_id) (system_vcpu_usage_ratio))",
"instant": false,
"interval": "",
"legendFormat": "vCPU Usage",
"refId": "B"
},
{
"expr": "avg(neuroncore_utilization_ratio)",
"instant": false,
"interval": "",
"legendFormat": "NeuronCore Usage",
"refId": "C"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Host System Utilization",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:664",
"format": "percentunit",
"label": null,
"logBase": 1,
"max": "1",
"min": "0",
"show": true
},
{
"$$hashKey": "object:665",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"system": "light-red",
"user": "light-green"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {},
"unit": "percentunit"
},
"overrides": []
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 8,
"x": 8,
"y": 37
},
"hiddenSeries": false,
"id": 24,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": true,
"steppedLine": false,
"targets": [
{
"expr": "avg by (usage_type) (system_vcpu_usage_ratio)",
"interval": "",
"legendFormat": "{{usage_type}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Host vCPU Usage",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:876",
"format": "percentunit",
"label": null,
"logBase": 1,
"max": "1",
"min": "0",
"show": true
},
{
"$$hashKey": "object:877",
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {
"Memory Usage Bytes": "rgb(223, 180, 0)",
"Memory Usage Percent": "rgb(0, 217, 255)"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "Prometheus",
"fieldConfig": {
"defaults": {
"custom": {},
"unit": "short"
},
"overrides": [
{
"matcher": {
"id": "byName",
"options": "Memory Usage Percent"
},
"properties": [
{
"id": "unit",
"value": "percentunit"
}
]
},
{
"matcher": {
"id": "byName",
"options": "Memory Usage Bytes"
},
"properties": [
{
"id": "unit",
"value": "bytes"
}
]
}
]
},
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 12,
"w": 8,
"x": 16,
"y": 37
},
"hiddenSeries": false,
"id": 26,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "7.2.1",
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [
{
"$$hashKey": "object:711"
},
{
"$$hashKey": "object:931",
"alias": "Memory Usage Bytes",
"yaxis": 2
}
],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "avg(system_memory_used_bytes / system_memory_total_bytes)",
"instant": false,
"interval": "",
"legendFormat": "Memory Usage Percent",
"refId": "A"
},
{
"expr": "avg(system_memory_used_bytes)",
"instant": false,
"interval": "",
"legendFormat": "Memory Usage Bytes",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Host Memory Usage",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:689",
"format": "percentunit",
"label": "",
"logBase": 1,
"max": "1",
"min": "0",
"show": true
},
{
"$$hashKey": "object:690",
"decimals": null,
"format": "bytes",
"label": "",
"logBase": 1,
"max": null,
"min": "0",
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"refresh": "5s",
"schemaVersion": 26,
"style": "dark",
"tags": [],
"templating": {
"list": [
{
"datasource": "Prometheus",
"filters": [],
"hide": 0,
"label": "",
"name": "Filters",
"skipUrlSync": false,
"type": "adhoc"
}
]
},
"time": {
"from": "now-6h",
"to": "now"
},
"timepicker": {},
"timezone": "",
"title": "neuron-monitor",
"uid": "EqWNYf5Mz",
"version": 68
}
|
2023-09-29T20:54:59.946Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuron-sys-tools/neuron-sysfs-user-guide.rst.txt
|
```
.. _neuron-sysfs-ug:
Neuron Sysfs User Guide
=======================
.. contents:: Table of contents
:local:
:depth: 3
Introduction
------------
The kernel provides a few ways in which userspace programs can get system information from the kernel space. Sysfs is one common way to do so. It is a virtual filesystem typically mounted on the ``/sys`` directory and contains information about hardware devices attached to the system and about drivers handling those devices. By navigating the hierarchical structure of the sysfs filesystem and viewing the information provided by its files and directories, you can gather valuable information that can help diagnose and resolve a wide range of hardware and system issues.
Thus a sysfs filesystem is set up per Neuron Device under ``/sys/devices/virtual/neuron_device`` to give you an insight into the Neuron Driver and Runtime at system level. By performing several simple CLIs such as reading or writing to a sysfs file, you can get information such as Runtime status, memory usage, Driver info etc. You can even create your own shell scripts to query Runtime and Driver statistics from sysfs and generate customized reports.
This user guide will first explain the Neuron sysfs structure and then introduce several ways in which you can perform diagnostic work with Neuron sysfs.
Neuron Sysfs Filesystem Structure
---------------------------------
High Level Overview
^^^^^^^^^^^^^^^^^^^
Here is the high level structure of the Neuron sysfs filesystem, where the total and present counters are not shown:
.. code-block:: bash
/sys/devices/virtual/neuron_device/
├── neuron0/
│ ├── subsystem
│ ├── uevent
│ ├── connected_devices
│ ├── core_count
│ ├── reset
│ ├── power/
│ │ ├── async
│ │ ├── control
│ │ ├── runtime_active_time
│ │ ├── runtime_active_kids
│ │ └── ...
│ ├── info/
│ │ ├── notify_delay
│ │ └── architecture/
│ │ ├── arch_type
│ │ ├── device_name
│ │ └── instance_type
│ ├── stats/
│ │ └── memory_usage/
│ │ └── host_mem/
│ │ ├── application_memory
│ │ ├── constants
│ │ ├── dma_buffers
│ │ └── tensors
│ ├── neuron_core0/
│ │ ├── info/
│ │ │ └── architecture/
│ │ │ └── arch_type
│ │ ├── stats/
│ │ │ ├── status/
│ │ │ │ ├── exec_bad_input
│ │ │ │ ├── hw_error
│ │ │ │ ├── infer_failed_to_queue
│ │ │ │ ├── resource_nc_error
│ │ │ │ ├── unsupported_neff_version
│ │ │ │ ├── failure
│ │ │ │ ├── infer_completed_with_error
│ │ │ │ ├── invalid_error
│ │ │ │ ├── success
│ │ │ │ ├── generic_error
│ │ │ │ ├── infer_completed_with_num_error
│ │ │ │ ├── resource_error
│ │ │ │ └── timeout
│ │ │ ├── memory_usage/
│ │ │ │ ├── device_mem/
│ │ │ │ │ ├── constants
│ │ │ │ │ ├── model_code
│ │ │ │ │ ├── model_shared_scratchpad
│ │ │ │ │ ├── runtime_memory
│ │ │ │ │ └── tensors
│ │ │ │ └── host_mem
│ │ │ └── other_info/
│ │ │ ├── flop_count
│ │ │ ├── inference_count
│ │ │ ├── model_load_count
│ │ │ └── reset_count
│ │ └── ...
│ ├── neuron_core1/
│ │ ├── info/
│ │ │ └── ...
│ │ └── stats/
│ │ └── ...
│ └── ...
├── neuron1
├── neuron2
├── neuron3
└── ...
Each Neuron Device is represented as a directory under ``/sys/devices/virtual/neuron_device/``, where ``neuron0/`` represents Neuron Device 0, ``neuron1/`` represents Neuron Device 1, etc. Each NeuronCore is represented as a directory under its Neuron Device directory, named ``neuron_core{0,1,2,...}``. Metrics such as Runtime and Driver info and statistics are collected per NeuronCore in two directories under the NeuronCore directory, i.e. ``info/`` and ``stats/``.
Most of the metrics belong to a category called “counter.”
Each counter is represented as a directory, which holds two numerical values as two files: total and present. Each memory usage counter has an additional value called peak.
The total value starts accumulating metrics when the Driver is loaded. The present value records the last changed metric value. The peak value records the max value so far.
Each counter has the same filesystem structure, as shown below:
.. code-block:: bash
/sys/devices/virtual/neuron_device/neuron0/neuron_core0/stats/status/
├── exec_bad_input/
│ ├── total
│ └── present
├── hw_error/
│ ├── total
│ └── present
├── infer_failed_to_queue/
│ ├── total
│ └── present
└── ...
Description for Each Metric
^^^^^^^^^^^^^^^^^^^^^^^^^^^
``info/``: this directory stores hardware information. All of them are not counter types:
* ``notify_delay``: Controls delays between notifications from Neuron Device. Current settings are on (``0``) or off (``-1``). Off by default.
* ``arch_type``: Architecture type of the Neuron Device. Sample architecture types are v1, v2, and v3. You can only read the value but not change it.
* ``instance_type``: Instance type of the Neuron Device. Sample instance types are Inf1, Inf2, and Trn1. You can only read the value but not change it.
* ``device_type``: Neuron Device type. Sample Neuron Device types are Inferentia, Inferentia2, and Trainium1. You can only read the value but not change it.
``stats/``: this directory stores Neuron Runtime and Driver statistics. It contains three subdirectories: ``status/``, ``memory_usage/``, and ``other_info/``.
* ``status/``: this directory stores the number of each return status of API calls. As explained in :ref:`The LIBNRT API Return Codes <nrt_api>`, every API call returns an NRT_STATUS value, which represents the return status of that API call. Our sysfs filesystem stores all ``NRT_STATUS`` as subdirectories under the ``status/`` directory. They all have the counter structure. Thus each ``NRT_STATUS`` subdirectory holds two values (total and present) and records the number of times you receive a certain ``NRT_STATUS``. The following is a description of each ``NRT_STATUS`` subdirectory. You should see the description align with what is described in :ref:`The LIBNRT API Return Codes <nrt_api>`.
* ``memory_usage/``: this directory contains memory usage statistics for both device and host, represented as counters. In this directory, the total counters indicate the current memory usage, present counters represent the memory allocation or deallocation amount in the previous operation, and peak counters indicate the maximum memory usage observed. Additionally, this directory provides detailed breakdown statistics for device and host memory usage. These memory breakdown details correspond to the :ref:`Memory Usage Summary <neuron_top_mem_usage>` section displayed in Neuron Monitor.
* ``device_mem/``: the amount of memory that Neuron Runtime uses for weights, instructions and DMA rings.
* This device memory per Neuron Core is further categorized into five types: ``constants/``, ``model_code/``, ``model_shared_scratchpad/``, ``runtime_memory/``, and ``tensors/``. Definitions for these categories can be found in the :ref:`Device Used Memory <neuron_top_device_mem_usage>` section. Each of these categories has total, present, and peak.
* ``host_mem/``: the amount of memory that Neuron Runtime uses for input and output tensors.
* The host memory per Neuron Device is further categorized into four types: ``application_memory/``, ``constants/``, ``dma_buffers/``, and ``tensors/``. Definitions for these categories can be found in the :ref:`Host Used Memory <neuron_top_host_mem_usage>` section. Each of these categories has total, present, and peak.
* ``other_info/``: this directory contains statistics that are not included by ``status/`` and ``memory_usage/``. All of them are not counter types:
* ``flop_count``: number of flops. You can use it to calculate the TFLOP/s as ``flop_count`` / time interval (a short sketch follows this section).
* ``inference_count``: number of successful inferences
* ``model_load_count``: number of successful model loads
* ``reset_count``: number of successful device resets
Other metrics:
* ``connected_devices``: a list of connected devices' ids. You should see the same output as neuron-ls's CONNECTED DEVICES.
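As a quick illustration of the ``flop_count`` calculation above, here is a minimal sketch; it assumes ``flop_count`` keeps growing while work is executed, and the 10 second sampling interval and the device/core indices are arbitrary choices for this example:
.. code-block:: bash
   #!/bin/bash
   # Estimate TFLOP/s for NeuronCore 0 on Neuron Device 0 by sampling flop_count twice.
   FLOP_FILE=/sys/devices/virtual/neuron_device/neuron0/neuron_core0/stats/other_info/flop_count
   INTERVAL=10
   start=$(cat "$FLOP_FILE")
   sleep "$INTERVAL"
   end=$(cat "$FLOP_FILE")
   # TFLOP/s = flops executed during the interval / interval length / 10^12
   awk -v s="$start" -v e="$end" -v t="$INTERVAL" 'BEGIN { printf "%.3f TFLOP/s\n", (e - s) / t / 1e12 }'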
Read and Write to Metrics
^^^^^^^^^^^^^^^^^^^^^^^^^
Reading a sysfs file gives the value for the corresponding metric. You can use the cat command to view the contents of the sysfs files:
.. code-block:: bash
ubuntu@ip-xxx-xx-xx-xxx:~$ sudo cat /sys/devices/virtual/neuron_device/neuron0/neuron_core0/stats/status/failure/total
0
ubuntu@ip-xxx-xx-xx-xxx:~$ sudo cat /sys/devices/virtual/neuron_device/neuron0/neuron_core0/info/architecture/arch_type
NCv2
Sysfs metrics of counter type are write-to-clear. You can write any value to the file, and the metric will be reset to 0:
.. code-block:: bash
ubuntu@ip-xxx-xx-xx-xxx:~$ echo 1 | sudo tee /sys/devices/virtual/neuron_device/neuron0/neuron_core0/stats/status/failure/total
1
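Building on this write-to-clear behavior, the following is a small sketch that resets every status counter of NeuronCore 0 on Neuron Device 0 (it only touches the paths described earlier and requires root privileges):
.. code-block:: bash
   #!/bin/bash
   # Clear the "total" and "present" values of every status counter for one NeuronCore.
   STATUS_DIR=/sys/devices/virtual/neuron_device/neuron0/neuron_core0/stats/status
   for f in "$STATUS_DIR"/*/total "$STATUS_DIR"/*/present; do
       echo 1 | sudo tee "$f" > /dev/null   # writing any value resets the counter to 0
   done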
Note
^^^^
All files under ``/sys/devices/virtual/neuron_device/neuron0/power`` such as ``runtime_active_kids`` or ``runtime_status`` are related to generic device power management. They are not created or controlled by the Neuron sysfs metrics described in this guide. The word ``runtime`` in these files does not refer to Neuron Runtime.
.. _troubleshoot_via_sysfs:
How to Troubleshoot via Sysfs
-----------------------------
You can perform simple and easy tasks to troubleshoot your ML jobs with one or a few CLIs to read or write the sysfs filesystem.
You can do aggregations across all NeuronCores and all Neuron Devices to get a summarized view using your own scripts.
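For example, here is a minimal aggregation sketch; the choice of the ``inference_count`` metric is illustrative, and any other counter path shown in the structure above could be summed the same way:
.. code-block:: bash
   #!/bin/bash
   # Sum successful inferences across every NeuronCore of every Neuron Device on the instance.
   total=0
   for f in /sys/devices/virtual/neuron_device/neuron*/neuron_core*/stats/other_info/inference_count; do
       count=$(cat "$f")
       total=$((total + count))
   done
   echo "Total successful inferences across all NeuronCores: $total"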
You can also use the Sysfs notification feature to wait passively (without wasting CPU cycles) for changes to the values of Sysfs files. To use this feature, you need to implement a user-space program that calls the poll() function on the Sysfs file that you want to wait on.
The poll() function has the following signature: ``unsigned int (*poll) (struct file *, struct poll_table_struct *)``.
By default, the Sysfs notification feature is turned off when the driver is loaded. To enable notifications, you can set the value of ``/sys/devices/virtual/neuron_device/neuron0/info/notify_delay`` to 0. To disable notifications, you can set it to -1. Please note that enabling this feature can impact performance.
Here is a sample user space program using poll():
.. code-block:: c
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char * argv[])
{
char readbuf[128];
int attr_fd = -1;
struct pollfd pfd;
int retval = 0;
ssize_t read_bytes;
if (argc < 2) {
fprintf(stderr, "Error: Please specify sysfs file path\n");
exit(1);
}
attr_fd = open(argv[1], O_RDONLY, 0);
if (attr_fd < 0) {
perror(argv[1]);
exit(2);
}
read_bytes = read(attr_fd, readbuf, sizeof(readbuf));
if (read_bytes < 0) {
perror(argv[1]);
exit(3);
}
printf("%.*s", (int)read_bytes, readbuf);
/* Sysfs signals attribute changes as POLLERR | POLLPRI events. */
pfd.fd = attr_fd;
pfd.events = POLLERR | POLLPRI;
pfd.revents = 0;
/* Poll with a 100 ms timeout per iteration; re-read the file whenever it changes. */
while ((retval = poll(&pfd, 1, 100)) >= 0) {
if (pfd.revents & (POLLERR | POLLPRI)) {
pfd.revents = 0;
lseek(attr_fd, 0, SEEK_SET);
read_bytes = read(attr_fd, readbuf, sizeof(readbuf));
if (read_bytes < 0) {
perror(argv[1]);
exit(4);
}
printf("%.*s", (int)read_bytes, readbuf);
}
}
return 0;
}
```
|
|
2023-09-29T20:55:00.035Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuron-sys-tools/nccom-test.rst.txt
|
```
.. _nccom-test:
=====================
NCCOM-TEST User Guide
=====================
.. contents:: Table of contents
:local:
:depth: 2
Overview
--------
**nccom-test** is a benchmarking tool for quickly evaluating the performance of Collective Communication operations
on one or more Neuron instances (it is compatible with both trn1 and inf2 instance types) or just for a fast sanity check
of the environment before attempting to run a more complex workload.
.. note::
On inf2 instances, only single-instance benchmarking is supported. Running a multi-node nccom-test benchmark
will result in an error.
Using nccom-test
----------------
Here is a simple example which will run an all-reduce with 2 workers (ranks) and a total size of 32MB:
.. code-block::
nccom-test -r 2 allr
size(B) count(elems) type time(us) algbw(GB/s) busbw(GB/s)
33554432 33554432 uint8 768 40.69 40.69
Avg bus bandwidth: 40.6901GB/s
Output description
^^^^^^^^^^^^^^^^^^
The command outputs a table with several columns of performance metrics.
There will be a line for every requested data size (by default the data size is 32MB, as seen in the previous example).
.. list-table::
:widths: 40 260
:header-rows: 1
* - Column name
- Description
* - size(B)
- Size in bytes for the data involved in this operation
* - count(elems)
- Number of elements in the data involved in this operation. For example, if **size(B)** is 4 and **type** is fp32,
then **count** will be 1 since one single fp32 element has been processed.
* - type
- Data type for the processed data. Can be: **uint8**, **int8**, **uint16**, **int16**, **fp16**, **bf16**, **int32**, **uint32**, **fp32**
* - time(us)
- Time in microseconds representing the P50 of all durations for the Collective Communication operations executed during the benchmark.
* - algbw(GB/s)
- Algorithm bandwidth in gibibytes (1GiB = 1,073,741,824 bytes) per second, calculated as **size(B)** / **time(us)**
* - busbw(GB/s)
- Bus bandwidth - bandwidth per data line in gibibytes per second - it provides a bandwidth number that is independent from the number of ranks (unlike **algbw**).
For a more in-depth explanation of bus bandwidth, please refer to `NVIDIA's nccl-tests documentation <https://github.com/NVIDIA/nccl-tests/blob/master/doc/PERFORMANCE.md>`_. A short calculation sketch follows this table.
* - Avg bus bandwidth
- Average of the values in the busbw column
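As a rough illustration of how these columns relate, the following sketch recomputes **algbw** and **busbw** for the first all-reduce result shown above; the 2*(n-1)/n all-reduce bus-bandwidth factor follows the NVIDIA nccl-tests documentation linked above and is an assumption of this example, not something reported by **nccom-test** itself:
.. code-block:: bash
   # Recompute algbw and all-reduce busbw for one result row:
   # size = 33554432 bytes, time = 768 us, 2 ranks (the first example above).
   awk -v size=33554432 -v t_us=768 -v n=2 'BEGIN {
       algbw = (size / (t_us * 1e-6)) / 1073741824;   # bytes per second converted to GiB/s
       busbw = algbw * 2 * (n - 1) / n;               # all-reduce bus bandwidth factor
       printf "algbw=%.2f GiB/s  busbw=%.2f GiB/s\n", algbw, busbw
   }'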
CLI arguments
^^^^^^^^^^^^^
.. list-table::
:widths: 40 80 260
:header-rows: 1
* - Argument
- Default value
- Description
* - <cc operation>
- N/A, required argument
- The type of Collective Communication operation to execute for this benchmark.
Supported types:
- ``all_reduce`` / ``allr``: All-Reduce
- ``all_gather`` / ``allg``: All-Gather
- ``reduce_scatter`` / ``redsct``: Reduce-Scatter
- ``sendrecv``: Send-Receive
* - ``-r, --nworkers``
- N/A, required argument
- Total number of workers (ranks) to use
* - ``-N, --nnodes``
- 1
- Total number of nodes (instances) to use. The number of workers will be divided equally across all nodes.
If this argument is greater than 1, the **NEURON_RT_ROOT_COMM_ID** environment variable needs to be set to
the host address of the instance **nccom-test** is run on, and a free port number
(for example: ``NEURON_RT_ROOT_COMM_ID=10.0.0.1:44444``). Additionally, either ``-s, --hosts`` needs to be provided
or a ``~/hosts`` file needs to exist - for more details refer to the ``-s,--hosts`` description below.
* - ``-b, --minbytes``
- 32M
- The starting size for the benchmark
* - ``-e, --maxbytes``
- 32M
- The end size for the benchmark. **nccom-test** will run benchmarks for all sizes between ``-b, --minbytes`` and
``-e, --maxbytes``, increasing the size by either ``-i, --stepbytes`` or ``-f, --stepfactor`` with every run.
* - ``-i, --stepbytes``
- (``--maxbytes`` - ``--minbytes``) / 10
- Amount of bytes with which to increase the benchmark's size on every subsequent run.
For example, for this combination of arguments: ``-b 8 -e 16 -i 4``, the benchmark will
be run for the following sizes: 8 bytes, 12 bytes, 16 bytes.
* - ``-f, --stepfactor``
- N/A
- Factor with which to increase the benchmark's size on every subsequent run.
For example, for this combination of argument values: ``-b 8 -e 32 -f 2``, the benchmark will
be run for the following sizes: 8 bytes, 16 bytes, 32 bytes.
* - ``-n, --iters``
- 20
- Number of Collective Communication operations to execute during the benchmark.
* - ``-w, --warmup_iters``
- 5
- Number of Collective Communication operations to execute as warmup during the benchmark
(which won't be counted towards the result).
* - ``-d, --datatype``
- ``uint8``
- Data type for the data used by the benchmark. Supported types: ``uint8``, ``int8``, ``uint16``, ``int16``,
``fp16``, ``bf16``, ``uint32``, ``int32``, ``fp32``. Input data will be zero filled, unless ``--check`` is
provided (currently, only available for ``--datatype fp32``) in which case it will be filled by a repeated
value of the requested type.
* - ``-c, --check``
- false
- If provided, the correctness of the operations will be checked. This will not impact results (time, algbw and busbw)
but will slightly increase the overall execution time.
.. note::
All arguments that take a size in bytes will also accept larger size units, for example:
``-f 2048`` can be written as ``-f 2kb`` or ``-f 1048576`` can be written as ``-f 1MB``.
Examples
^^^^^^^^
.. note::
Performance data shown in these examples should not be considered up-to-date. For the latest performance
data, please refer to the performance section.
Single Instance Examples
~~~~~~~~~~~~~~~~~~~~~~~~
- Quick environment validation
.. code-block::
nccom-test -r 2 allr
size(B) count(elems) type time(us) algbw(GB/s) busbw(GB/s)
33554432 33554432 uint8 768 40.69 40.69
Avg bus bandwidth: 40.6901GB/s
If a problem was found, it can be reported in two possible ways:
- Immediately:
.. code-block::
nccom-test -r 2 allr
Neuron DKMS Driver is not running! Read the troubleshooting guide at: https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-runtime/nrt-troubleshoot.html#neuron-driver-installation-fails
- After a benchmark attempt:
.. code-block::
nccom-test -r 2 allr
size(B) count(elems) type time(us) algbw(GB/s) busbw(GB/s)
33554432 Failure running neuron-bench - log file /tmp/nccom_test_log_7pqpdfjf.log
1 errors found - test failed
In this case, further information about the error can be found in the ``neuron-bench`` log file.
- 2 rank all-reduce on a single instance for sizes ranging from 1KiB to 1GiB with a step of 4x
.. code-block::
nccom-test -r 2 --minbytes 1kb --maxbytes 1gb --stepfactor 4 --datatype fp32 allr
size(B) count(elems) type time(us) algbw(GB/s) busbw(GB/s)
1024 256 fp32 58 0.02 0.02
4096 1024 fp32 58 0.07 0.07
16384 4096 fp32 58 0.26 0.26
65536 16384 fp32 58 1.05 1.05
262144 65536 fp32 60 4.07 4.07
1048576 262144 fp32 68 14.36 14.36
4194304 1048576 fp32 107 36.51 36.51
16777216 4194304 fp32 332 47.06 47.06
67108864 16777216 fp32 1214 51.48 51.48
268435456 67108864 fp32 4750 52.63 52.63
1073741824 268435456 fp32 18930 52.83 52.83
Avg bus bandwidth: 23.6671GB/s
- 32 rank all-gather on a single instance for sizes ranging from 1KiB to 1MiB with a step of 8x, with correctness checking
.. code-block::
nccom-test -r 32 --minbytes 1kb --maxbytes 1mb --stepfactor 8 --datatype fp32 --check allg
size(B) count(elems) type time(us) algbw(GB/s) busbw(GB/s)
1024 256 fp32 151 0.01 0.01
8192 2048 fp32 149 0.05 0.05
65536 16384 fp32 150 0.41 0.39
524288 131072 fp32 179 2.73 2.64
Avg bus bandwidth: 0.7731GB/s
Multiple Instances Example
~~~~~~~~~~~~~~~~~~~~~~~~~~
- 64 rank all-reduce on two instances for sizes ranging from 8 bytes to 1GiB with a step of 2x, running 50 ops
.. code-block::
NEURON_RT_ROOT_COMM_ID=10.1.4.145:45654 nccom-test -r 64 -N 2 -b 8 -e 1GB -f 2 -n 50 -w 5 -d fp32 allr --hosts 127.0.0.1 10.1.4.138
size(B) count(elems) type time(us) algbw(GB/s) busbw(GB/s)
8 2 fp32 520 0.00 0.00
16 4 fp32 520 0.00 0.00
32 8 fp32 523 0.00 0.00
64 16 fp32 525 0.00 0.00
128 32 fp32 553 0.00 0.00
256 64 fp32 709 0.00 0.00
512 128 fp32 782 0.00 0.00
1024 256 fp32 840 0.00 0.00
2048 512 fp32 881 0.00 0.00
4096 1024 fp32 916 0.00 0.01
8192 2048 fp32 1013 0.01 0.01
16384 4096 fp32 1031 0.01 0.03
32768 8192 fp32 1174 0.03 0.05
65536 16384 fp32 1315 0.05 0.09
131072 32768 fp32 1315 0.09 0.18
262144 65536 fp32 1311 0.19 0.37
524288 131072 fp32 1312 0.37 0.73
1048576 262144 fp32 1328 0.74 1.45
2097152 524288 fp32 1329 1.47 2.89
4194304 1048576 fp32 1378 2.83 5.58
8388608 2097152 fp32 1419 5.51 10.84
16777216 4194304 fp32 2138 7.31 14.39
33554432 8388608 fp32 2711 11.53 22.69
67108864 16777216 fp32 3963 15.77 31.05
134217728 33554432 fp32 6279 19.91 39.19
268435456 67108864 fp32 11954 20.91 41.17
536870912 134217728 fp32 21803 22.93 45.15
1073741824 268435456 fp32 41806 23.92 47.09
Avg bus bandwidth: 9.3924GB/s
```
|
|
2023-09-29T20:55:00.314Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/release-notes/tools/aws-neuronx-tools.rst.txt
|
```
.. _neuron-tools-rn:
Neuron System Tools
===================
.. contents:: Table of Contents
:local:
:depth: 2
Neuron Tools [2.14.6.0]
------------------------
Date: 09/15/2023
New in the release:
* Added legend in ``neuron-ls`` to clarify wrap around edges for topology view.
* Improved error messaging when passing invalid arguments to ``neuron-profile view``.
* Fixed bug in ``neuron-profile`` that incorrectly calculated buffer utilization for more recently compiled NEFFs.
* Fixed bug in ``neuron-profile`` where the profile would sometimes include additional idle time while waiting for execution to start.
* Profiler output now includes HLO name in addition to framework layer names.
* ``neuron-profile view`` now has ``--output-format json`` option which will write to a file specified by ``--output-file <name>`` (default is ``ntff.json``) instead of writing data to InfluxDB.
Neuron Tools [2.13.4.0]
------------------------
Date: 08/28/2023
New in the release:
* ``--check`` option of ``nccom-test`` now supports more data types (``fp16``, ``bf16``, ``(u)int8``, ``(u)int16``, and ``(u)int32`` are now supported in addition to ``fp32``)
* Fixed bug in ``nccom-test`` that would wait indefinitely for execution to end when running on multiple instances (``-N 2`` and higher).
* Fixed bug in ``neuron-profile`` to prevent a crash during utilization calculation
Neuron Tools [2.12.2.0]
-------------------------
Date: 7/19/2023
New in the release:
* Bumped the max supported profiling NTFF version to version 2 to resolve crashes when postprocessing NTFFs captured with newer versions of the Neuron Runtime Library.
When viewing profiles captured using Neuron Runtime Library 2.15 or above, please upgrade tools to 2.12.
This version of Neuron tools remains compatible with NTFF version 1.
* Bug fixes for ``neuron-profile`` related to the calculation of some summary stats.
Neuron Tools [2.11.10.0]
-------------------------
Date: 6/14/2023
New in the release:
* ``nccom-test`` can now show multiple latency stats in the results table, such as average or percentiles, by specifying the ``-s`` option (for example: ``-s p10 p99 avg p50``).
* First public support for ``neuron-profile`` as a standalone tool that can be used to profile executions on Neuron Devices. Visit the Neuron Tools documentation page for more details on how to use the Neuron Profiler.
Neuron Tools [2.10.1.0]
-------------------------
Date: 05/01/2023
New in the release:
* Added new Neuron Collectives benchmarking tool, ``nccom-test``, to enable benchmarking sweeps on various Neuron Collective Communication operations. See new nccom-test documentation under System Tools for more details.
* Expanded support for Neuron profiling to include runtime setup/teardown times and collapsed execution of NeuronCore engines and DMA. See Tensorboard release notes and tutorial for more details.
Neuron Tools [2.9.5.0]
-------------------------
Date: 03/28/2023
New in the release:
* Updated neuron-top to show effective FLOPs across all NeuronCores.
Neuron Tools [2.8.2.0]
-------------------------
Date: 02/24/2023
New in the release:
* Updated neuron-top to show aggregated utilization/FLOPs across all NeuronCores.
Neuron Tools [2.7.2.0]
-------------------------
Date: 02/08/2023
New in the release:
* Added support for model FLOPS metrics in both neuron-monitor and neuron-top. More details can be found in the Neuron Tools documentation.
Neuron Tools [2.6.0.0]
-------------------------
Date: 12/09/2022
This release adds support for profiling with the Neuron Plugin for TensorBoard on TRN1. Please check out the documentation :ref:`neuronx-plugin-tensorboard`.
New in the release:
* Updated profile post-processing for workloads executed on TRN1
Neuron Tools [2.5.19.0]
-------------------------
Date: 11/07/2022
New in the release:
* Minor bug fixes and improvements.
Neuron Tools [2.5.16.0]
-------------------------
Date: 10/26/2022
New in the release:
* New ``neuron-monitor`` and ``neuron-top`` feature: **memory utilization breakdown**. This new feature provides more details on how memory is currently being used on the Neuron Devices as well as on the host instance.
* ``neuron-top``'s UI layout has been updated to accommodate the new **memory utilization breakdown** feature.
* ``neuron-monitor``'s ``inference_stats`` metric group was renamed to ``execution_stats``. While the previous release still supported ``inference_stats``, starting this release the name ``inference_stats`` is considered deprecated and can't be used anymore.
.. note ::
For more details on the new **memory utilization breakdown** feature in ``neuron-monitor`` and ``neuron-top`` check out the full user guides: :ref:`neuron-monitor-ug` and :ref:`neuron-top-ug`.
Bug Fixes:
* Fix a rare crash in ``neuron-top`` when the instance is under heavy CPU load.
* Fix process names on the bottom tab bar of ``neuron-top`` sometimes disappearing for smaller terminal window sizes.
Neuron Tools [2.4.6.0]
-------------------------
Date: 10/10/2022
This release adds support for both EC2 INF1 and TRN1 platforms. Name of the package changed from aws-neuron-tools to aws-neuronx-tools. Please remove the old package before installing the new one.
New in the release:
* Added support for ECC counters on Trn1
* Added version number output to neuron-top
* Expanded support for longer process tags in neuron-monitor.
* Removed hardware counters from the default neuron-monitor config to avoid sending repeated errors; they will be added back in a future release.
* ``neuron-ls`` - Added option ``neuron-ls --topology`` with ASCII graphics output showing the connectivity between Neuron Devices on an instance. This feature aims to help in understanding pathways between Neuron Devices and in exploiting code or data locality.
Bug Fixes:
* Fix neuron-monitor and neuron-top to show the correct Neuron Device when running in a container where not all devices are present.
Neuron Tools [2.1.4.0]
-------------------------------
Date: 04/29/2022
* Minor updates
Neuron Tools [2.0.790.0]
--------------------------------
Date: 03/25/2022
* ``neuron-monitor``: fixed a floating point error when calculating CPU utilization.
Neuron Tools [2.0.623.0]
--------------------------------
Date: 01/20/2022
New in the release:
* ``neuron-top`` - Added an “all” tab that aggregates all running Neuron processes into a single view.
* ``neuron-top`` - Improved startup time to approximately 1.5 seconds in most cases.
* ``neuron-ls`` - Removed header message about updating tools from neuron-ls output
Bug fixes:
* ``neuron-top`` - Reduced single CPU core usage down to 0.7% from 80% on inf1.xlarge when running ``neuron-top`` by switching to an event-driven
approach for screen updates.
Neuron Tools [2.0.494.0]
------------------------
Date: 12/27/2021
* Security related updates related to log4j vulnerabilities.
Neuron Tools [2.0.327.0]
------------------------
Date: 11/05/2021
* Updated Neuron Runtime (which is integrated within this package) to ``libnrt 2.2.18.0`` to fix a container issue that was preventing
the use of containers when /dev/neuron0 was not present. See details here :ref:`neuron-runtime-release-notes`.
Neuron Tools [2.0.277.0]
------------------------
Date: 10/27/2021
New in this release:
- Tools now support applications built with Neuron Runtime 2.x (``libnrt.so``).
.. important::
- You must update to the latest Neuron Driver (``aws-neuron-dkms`` version 2.1 or newer)
for proper functionality of the new runtime library.
- Read :ref:`introduce-libnrt`
application note that describes :ref:`why are we making this
change <introduce-libnrt-why>` and
how :ref:`this change will affect the Neuron
SDK <introduce-libnrt-how-sdk>` in detail.
- Read :ref:`neuron-migrating-apps-neuron-to-libnrt` for detailed information of how to
migrate your application.
- Updates have been made to ``neuron-ls`` and ``neuron-top`` to
significantly improve the interface and utility of information
provided.
- Expands ``neuron-monitor`` to include additional information when
used to monitor latest Frameworks released with Neuron 1.16.0.
**neuron_hardware_info**
Contains basic information about the Neuron hardware.
::
"neuron_hardware_info": {
"neuron_device_count": 16,
"neuroncore_per_device_count": 4,
"error": ""
}
- ``neuron_device_count`` : number of available Neuron Devices
- ``neuroncore_per_device_count`` : number of NeuronCores present on each Neuron Device
- ``error`` : will contain an error string if any occurred when getting this information
(usually due to the Neuron Driver not being installed or not running).
- ``neuron-cli`` is entering maintenance mode as its use is no longer
relevant when using ML Frameworks with an integrated Neuron
Runtime (``libnrt.so``). See :ref:`maintenance_mxnet_1_5` for more information.
- For more information visit :ref:`neuron-tools`
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/tensorboard/index.rst.txt
```
TensorBoard
===========
TensorBoard for Trn1
--------------------
.. toctree::
:maxdepth: 1
Track Training Progress in TensorBoard using PyTorch Neuron </tools/tutorials/tutorial-tensorboard-scalars-mnist>
TensorBoard Plugin for Neuron (Trn1) </tools/tensorboard/getting-started-tensorboard-neuronx-plugin>
What's New </release-notes/tools/tensorboard-neuron>
TensorBoard for Inf1
--------------------
.. toctree::
:maxdepth: 1
TensorBoard Plugin for Neuron (Inf1) </tools/tensorboard/getting-started-tensorboard-neuron-plugin>
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/tutorials/tutorial-tensorboard-scalars-mnist.rst.txt
```
.. _tb_track_training_minst:
Track Training Progress in TensorBoard using PyTorch Neuron
============================================================
.. contents:: Table of Contents
:local:
:depth: 2
This tutorial explains how to track training progress in TensorBoard while running a multi-layer perceptron MNIST model on Trainium using PyTorch Neuron.
Multi-layer perceptron MNIST model
----------------------------------
This tutorial is based on the MNIST example for PyTorch Neuron on Trainium.
For the full tutorial, please see :ref:`Multi-Layer Perceptron Training Tutorial <neuronx-mlp-training-tutorial>`.
Output TensorBoard logs
-----------------------
To generate TensorBoard logs, we first modify the training script to use the ``SummaryWriter``:
.. code:: python
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter('./output')
In the training loop, we can then use the ``add_scalar`` API to log the loss per step.
.. code:: python
writer.add_scalar("step loss", loss, idx)
At the end of the script, add ``writer.flush()`` to ensure all logs are written.
Save the following code as :download:`train_tb.py <examples/pytorch/mnist_mlp/train_tb.py>` and run it as ``python3 train_tb.py`` on a Trn1 instance.
The generated logs can be found in the ``./output`` directory that was passed to ``SummaryWriter``.
.. literalinclude:: /src/examples/pytorch/mnist_mlp/train_tb.py
:language: python
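For reference, here is a minimal, self-contained sketch of how these pieces fit together. The linear model and random data below are placeholders that run on CPU for illustration only; the tutorial's ``train_tb.py`` above applies the same pattern to the MLP MNIST model on Trainium.
.. code:: python
import torch
from torch.utils.tensorboard import SummaryWriter
# Placeholder model, optimizer, and loss; the tutorial uses the MLP MNIST model instead.
model = torch.nn.Linear(784, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()
writer = SummaryWriter('./output')  # TensorBoard events are written under ./output
for idx in range(100):  # stand-in for iterating over a real data loader
    data = torch.randn(32, 784)
    target = torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    loss = loss_fn(model(data), target)
    loss.backward()
    optimizer.step()
    writer.add_scalar("step loss", loss, idx)  # log one scalar per training step
writer.flush()  # ensure all pending events are written to disk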
View loss in TensorBoard
------------------------
In order to view your training metrics, install TensorBoard in your Python environment:
.. code:: bash
pip install tensorboard
Then, launch TensorBoard with the ``./output`` directory:
.. code:: bash
tensorboard --logdir ./output
Once running, open a new SSH connection to the instance and port-forward
TCP port 6006 (ex: -L 6006:127.0.0.1:6006). Once the tunnel is
established, TensorBoard can then be accessed via web browser at the
following URL: `http://localhost:6006 <http://localhost:6006/>`__.
Please note that you will not be able to access TensorBoard if you
disconnect your port-forwarding SSH session to the Trainium instance.
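For example, assuming an Ubuntu-based instance, the tunnel can be opened with a command such as the following (the key file and instance address are placeholders):
.. code:: bash
ssh -i <PEM key file> ubuntu@<instance DNS> -L 6006:127.0.0.1:6006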
.. image:: tb-scalars.png
:alt: Image: image.png
In TensorBoard, you can now see the loss per step plotted.
When capturing loss for multiple runs, you can plot them together on the same graph to compare runs.
Be sure to change the output directory for different runs, for example ``./output/run1`` for the first, ``./output/run2`` for the second, etc.
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/helper-tools/index.rst.txt
```
Helper Tools
============
.. toctree::
:maxdepth: 1
Check Model </tools/helper-tools/tutorial-neuron-check-model>
GatherInfo </tools/helper-tools/tutorial-neuron-gatherinfo>
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuron-sys-tools/neuron-profile-user-guide.rst.txt
```
.. _neuron-profile-ug:
Neuron Profile User Guide
=========================
.. contents:: Table of contents
:local:
:depth: 2
Overview
--------
**neuron-profile** is a tool to profile and analyze the performance of an ML model compiled with the Neuron compiler
and run on Neuron devices.
.. note::
Please use the ``aws-neuronx-tools`` package from Neuron SDK 2.11 or higher.
Installation
------------
``neuron-profile`` comes as part of the ``aws-neuronx-tools`` package, and will be installed to ``/opt/aws/neuron/bin``.
The Neuron web profile viewer utilizes InfluxDB OSS 2.x to store time series data for the profiled workloads during postprocessing.
Please follow the instructions provided at https://portal.influxdata.com/downloads/ for the correct OS. A sample installation
of InfluxDB is provided below.
Ubuntu
~~~~~~
::
# Neuron
. /etc/os-release
sudo tee /etc/apt/sources.list.d/neuron.list > /dev/null <<EOF
deb https://apt.repos.neuron.amazonaws.com ${VERSION_CODENAME} main
EOF
wget -qO - https://apt.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB | sudo apt-key add -
sudo apt-get update -y
sudo apt-get install aws-neuronx-runtime-lib aws-neuronx-dkms -y
sudo apt-get install aws-neuronx-tools -y
# InfluxDB
wget -q https://repos.influxdata.com/influxdata-archive_compat.key
echo '393e8779c89ac8d958f81f942f9ad7fb82a25e133faddaf92e15b16e6ac9ce4c influxdata-archive_compat.key' | sha256sum -c && cat influxdata-archive_compat.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg > /dev/null
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list
sudo apt-get update && sudo apt-get install influxdb2 influxdb2-cli -y
sudo systemctl start influxdb
influx setup
# Fill in the information to finish the setup
AL2
~~~
::
# Neuron
sudo tee /etc/yum.repos.d/neuron.repo > /dev/null <<EOF
[neuron]
name=Neuron YUM Repository
baseurl=https://yum.repos.neuron.amazonaws.com
enabled=1
metadata_expire=0
EOF
sudo rpm --import https://yum.repos.neuron.amazonaws.com/GPG-PUB-KEY-AMAZON-AWS-NEURON.PUB
sudo yum install aws-neuronx-runtime-lib aws-neuronx-dkms -y
sudo yum install aws-neuronx-tools -y
# InfluxDB
cat <<EOF | sudo tee /etc/yum.repos.d/influxdata.repo
[influxdata]
name = InfluxData Repository - Stable
baseurl = https://repos.influxdata.com/stable/\$basearch/main
enabled = 1
gpgcheck = 1
gpgkey = https://repos.influxdata.com/influxdata-archive_compat.key
EOF
sudo yum install influxdb2 influxdb2-cli -y
sudo systemctl start influxdb
influx setup
# Fill in the information to finish the setup
Capturing a profile
-------------------
The ``neuron-profile`` tool can both capture and post-process profiling information.
In the simplest mode, it takes a compiled model (a NEFF), executes it, and saves the profile results to a NTFF (``profile.ntff`` by default).
For this example, we assume a NEFF is already available as ``file.neff``.
::
$ neuron-profile capture -n file.neff -s profile.ntff
Processing and viewing the profile results
------------------------------------------
The ``view`` subcommand of ``neuron-profile`` will handle post-processing the profiling data and starting up an HTTP server that users can
navigate to in order to see profiling results.
Viewing a single profile
~~~~~~~~~~~~~~~~~~~~~~~~
The first way to invoke ``neuron-profile view`` is to pass both the NEFF and the NTFF to this command.
It will post-process these artifacts and print out a direct link to the profile view.
::
$ neuron-profile view -n file.neff -s profile.ntff
View profile at http://0.0.0.0:3001/profile/n_fdc71a0b582ee3009711a96e59958af921243921
ctrl-c to exit
Viewing multiple profiles
~~~~~~~~~~~~~~~~~~~~~~~~~
Alternatively, when post-processing multiple profiles, it may be desirable to have a persistent server running while processing results in the background.
In this case, we can skip passing arguments to the command, which will direct users to the main page listing all available profiles.
::
$ neuron-profile view
View a list of profiles at http://0.0.0.0:3001/
In a separate window, we can kick off the post-processing without launching another server by passing the ``--ingest-only`` flag.
::
$ neuron-profile view -n file.neff -s profile.ntff --ingest-only
Profile "n_47cf9972d42798d236caa68952d0d29a76d8bd66" is ready to view
``n_47cf9972d42798d236caa68952d0d29a76d8bd66`` is the bucket where the data is stored. We can find this profile at ``localhost:3001/profile/<bucket>``.
Accessing the profiles
~~~~~~~~~~~~~~~~~~~~~~
If ``neuron-profile view`` is run on a remote instance, you may need to use port forwarding to access the profiles.
From the local machine, SSH to the remote instance and forward ports 3001 (the default ``neuron-profile`` HTTP server port) and 8086 (the default
influxdb port). Then in the browser, go to ``localhost:3001`` to view the profiles.
::
$ ssh <user>@<ip> -L 3001:localhost:3001 -L 8086:localhost:8086
Understanding a Neuron profile
------------------------------
This section provides a quick overview of the features and information available through the Neuron web profile viewer.
For more information on terms used, please check out the :ref:`neuron_hw_glossary`.
Timeline
~~~~~~~~
|neuron-profile-web-timeline|
The execution timeline is plotted based on the elapsed nanoseconds since the start of execution.
Starting from the bottom, the ``TensorMatrix Utilization`` shows the efficiency of the TensorEngine, and
the ``Pending DMA Count`` and ``DMA Throughput`` rows show the DMA activity. In general, we want these to be as high
as possible; in some cases these rows can give clues as to whether the workload is memory or compute bound.
Next are the individual NeuronCore engine executions. These rows show the start and end times for instructions executed by each
engine, and clicking on one of these bars will show more detailed information, as well as any dependencies that were found.
For models involving collective compute operations, you will additionally see rows labeled with ``CC-core``, which are used to synchronize
the CC operations.
Towards the top is the DMA activity. These can include the transfers of input and output tensors, intermediate tensors, and any
additional spilling or loading to and from the on-chip SRAM memory.
Features
~~~~~~~~
The following are some useful features that may help with navigating a profile:
- Dragging your cursor across a portion of the timeline will zoom in to the selected window, providing a more in depth view of the execution during that time period.
- Hovering over a point will reveal a subset of information associated with it.
- Clicking a point will open a text box below the timeline with all the information associated with it.
- Right-clicking a point will drop a marker at that location. This marker will persist when zooming in and out.
- All marker information can be found by clicking the ``Annotations`` button.
- Markers can be saved and loaded by using a provided name for the marker set.
- Individual markers can be renamed or deleted in this menu as well.
- The ``Edit view settings`` can be used to further customize the timeline view. For example, changing the ``Instruction Grouping`` dropdown option to "Layer" will re-color the timeline based on the associated framework layer name.
Additionally, there are various summary buttons that can be clicked to provide more information on the model/NEFF, such as the input and output tensors,
number of FLOPs, and the start and end of a framework layer.
|neuron-profile-web-summaries|
CLI reference
-------------
.. rubric:: neuron-profile capture
.. program:: neuron-profile
.. option:: neuron-profile capture [parameters] [inputs...]
Takes a given compiled NEFF, executes it, and collects the profile results.
When no inputs are provided, all-zero inputs are used, which may result in inf or NaNs.
It is recommended to use ``--ignore-exec-errors`` in this case.
- :option:`-n,--neff` (string): the compiled NEFF to profile
- :option:`-s,--session-file` (string): the file to store profile session information in
- :option:`--ignore-exec-errors`: ignore errors during execution
- :option:`inputs` (positional args): List of inputs in the form of <NAME> <FILE_PATH> separated by space, e.g. ``IN1 x.npy IN2 y.npy`` (see the example below).
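For example, a capture that supplies two named input tensors saved as NumPy files might look like the following sketch (the tensor names and file paths are illustrative placeholders):
::
$ neuron-profile capture -n file.neff -s profile.ntff IN1 x.npy IN2 y.npy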
.. option:: neuron-profile view [parameters]
- :option:`-n,--neff-path` (string): the compiled NEFF file location
- :option:`-s,--session-file` (string): the profile results NTFF file location
- :option:`--db-endpoint` (string): the endpoint of InfluxDB (default: ``http://localhost:8086``)
- :option:`--db-org` (string): the org name of InfluxDB
- :option:`--port` (int): the port number of the http server (default: 3001)
- :option:`--force`: force overwrite an existing profile in the database
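As an illustrative sketch, a view invocation that combines several of these options to overwrite a previously ingested profile and serve on a non-default port might look like the following (the values shown are placeholders):
::
$ neuron-profile view -n file.neff -s profile.ntff --db-endpoint http://localhost:8086 --port 3002 --force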
Troubleshooting
---------------
InfluxDB not installed
~~~~~~~~~~~~~~~~~~~~~~
::
$ neuron-profile view -n file.neff -s profile.ntff
ERRO[0001] To install influxdb, go to https://portal.influxdata.com/downloads/ and follow the instructions there
influxdb not setup correctly: exec: "influx": executable file not found in $PATH
::
$ neuron-profile view -n file.neff -s profile.ntff
ERRO[0000]
influxdb token not setup correctly: exit status 1
Try executing "systemctl start influxdb" and "influx setup"
Running ``neuron-profile view`` without InfluxDB installed will result in an error and a pointer to the InfluxDB installation instructions.
Please follow the provided instructions and retry.
Too many open files
~~~~~~~~~~~~~~~~~~~
::
influxdb2client E! Write error: internal error: unexpected error writing points to database: [shard 10677] open /home/ubuntu/.influxdbv2/engine/data/7caae65aaa48380d/autogen/10677/index/0/MANIFEST: too many open files
InfluxDB will encounter "too many open files" and out of memory errors after a few hundred buckets have been created.
Two ways to solve this are to delete unused buckets or increase the system file descriptor limit.
To increase the file descriptor limit, add the following lines to ``/etc/security/limits.d/efa.conf`` and ``/etc/security/limits.conf``:
::
* soft nofile 1048576
* hard nofile 1048576
Add the following lines to ``/etc/sysctl.conf``:
::
fs.file-max = 197341270
vm.max_map_count=1048576
Commit changes by running ``sudo sysctl -p``.
.. |neuron-profile-web-timeline| image:: /images/neuron-profile-web-timeline_2-11.png
.. |neuron-profile-web-summaries| image:: /images/neuron-profile-web-summaries_2-11.png
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuronperf/index.rst.txt
```
.. _neuronperf:
=================
NeuronPerf (Beta)
=================
NeuronPerf is a lightweight Python library with a simple API that enables fast measurements of performance when running models using Neuron.
.. _neuronperf_quickstart:
NeuronPerf Quickstart
---------------------
To install NeuronPerf in your Neuron environment, execute:
.. code:: bash
$ pip install neuronperf --extra-index-url=https://pip.repos.neuron.amazonaws.com
Refer to the :ref:`neuronperf_examples` and :ref:`neuronperf_user_guide` to get started.
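As a quick orientation, a benchmark run typically follows the pattern sketched below. The file name and input shape are illustrative assumptions, and the call names follow the NeuronPerf examples; see the Examples and API Reference pages above for exact usage.
.. code:: python
import torch
import neuronperf as npf
import neuronperf.torch  # framework-specific submodule (PyTorch shown here)
# Assumption: a model has already been compiled and saved beforehand
# (for example with ``torch.neuron.trace`` followed by ``.save``).
filename = "model_neuron.pt"
inputs = torch.zeros(1, 3, 224, 224)
reports = npf.torch.benchmark(filename, inputs)  # run the benchmark sweep on the compiled model
npf.print_reports(reports)                       # print a summary of throughput and latency results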
.. _neuronperf_user_guide:
NeuronPerf User Guide
---------------------
.. toctree::
:maxdepth: 1
Overview <neuronperf_overview>
Terminology <neuronperf_terminology>
Examples <neuronperf_examples>
Benchmark Guide <neuronperf_benchmark_guide>
Evaluate Guide <neuronperf_evaluate_guide>
Compile Guide <neuronperf_compile_guide>
Model Index Guide <neuronperf_model_index_guide>
NeuronPerf API Reference
------------------------
.. toctree::
:maxdepth: 1
API <neuronperf_api>
Framework Notes <neuronperf_framework_notes>
FAQ
---
.. toctree::
:maxdepth: 1
FAQ <neuronperf_faq>
Troubleshooting
---------------
.. toctree::
:maxdepth: 1
Troubleshooting <neuronperf_troubleshooting>
Release Notes
-------------
.. toctree::
:maxdepth: 1
rn
```
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/tensorboard/getting-started-tensorboard-neuronx-plugin.rst.txt
```
.. _neuronx-plugin-tensorboard:
Neuron Plugin for TensorBoard (Trn1)
====================================
.. contents:: Table of Contents
:local:
:depth: 2
Overview
--------
This guide is for developers who want to better understand how their
model is executed using the Neuron SDK through TensorBoard.
The Neuron plugin for TensorBoard provides metrics on the performance of machine learning tasks accelerated using the Neuron SDK. It is
compatible with TensorBoard versions 1.15 and higher. It provides visualizations and profiling results for graphs executed on NeuronCores.
.. note::
The following information is compatible with Neuron SDK for Trn1. For a walkthrough on Inf1, please check out the guide
:ref:`neuron-plugin-tensorboard`.
Enable profiling on Trn1
------------------------
.. note::
Profiling is currently only supported with PyTorch Neuron (``torch-neuronx``).
Please refer to the following guides:
- PyTorch-Neuron
- :ref:`torch-neuronx-profiling-with-tb`
Launch TensorBoard
------------------
In this step, we will process the Neuron profile data and launch TensorBoard.
1. Install the Neuron plugin for TensorBoard on your EC2 instance.
.. code:: bash
python -m pip config set global.extra-index-url "https://pip.repos.neuron.amazonaws.com"
pip install tensorboard-plugin-neuronx
.. note::
If using TensorBoard >= 2.5, please use the ``--load_fast=false`` option when launching.
``tensorboard --logdir results --load_fast=false``
2. After you see the following message, TensorBoard is ready to use. By default,
TensorBoard will be launched at ``localhost:6006``.
::
...
Serving TensorBoard on localhost; to expose to the network, use a proxy or pass --bind_all
TensorBoard 2.4.1 at http://localhost:6006/ (Press CTRL+C to quit)
View results in TensorBoard
---------------------------
In this step, we will view the Neuron plugin for TensorBoard from a browser on your local
development machine.
1. Connect to the EC2 instance where TensorBoard is running while enabling port forwarding.
In this example, we assume TensorBoard has been launched using the default address ``localhost:6006``.
.. code:: bash
# if Ubuntu-based AMI
ssh -i <PEM key file> ubuntu@<instance DNS> -L 6006:localhost:6006
# if AL2-based AMI
ssh -i <PEM key file> ec2-user@<instance DNS> -L 6006:localhost:6006
2. In a browser, visit |tensorboard_address|.
3. In the top navigation bar, switch from ``Graphs`` to ``Neuron``. If it does not show up,
please wait a while and refresh the page while the plugin loads. If the issue persists, check
the ``Inactive`` dropdown list on the right and look for ``Neuron``.
|image1|
4. If TensorBoard fails to find the generated logs, you will see the following message:
|image2|
In this case, please make sure the version of the ``aws-neuronx-tools``
package and the Neuron framework package are from Neuron release 2.6 or newer.
Neuron Trace View
-----------------
|image3|
The trace view gives a high level timeline of execution by aligning Neuron events, such as Neuron Device execution,
data transfers, and Collective Compute synchronization (if applicable), with other events from the XLA profiler.
Use this view to better understand bottlenecks during the run, and potentially experiment with how execution changes
by moving the ``mark_step()`` call which will execute the graph.
Neuron Operator View
--------------------
|image4|
The operator view can show timing information for both the framework operators and HLO operators by selecting
the ``operator-framework`` and ``operator-hlo`` tools respectively. The pie charts show breakdowns of the time taken
by device, as well as per operator on a single device. The table below lists out the operators and can be sorted by clicking
on the column headers. For fused operations, hover over the ``?`` to see which operators are being executed.
For a quick glance at the most time consuming operators, click the ``Time %`` column in the table to sort by the relative
time spent on this type of operation compared to the rest of the model.
Neuron Operator Timeline View
-----------------------------
|image5|
The operator timeline view is a detailed look into a single execution with Neuron. A high level overview at the top breaks
down the execution into categories, including Neuron Runtime setup time, as well as NeuronCore compute engine and DMA engine busyness.
Activity on the compute and DMA engines is further categorized as compute, control, and data transfer intervals, which are
shown as separate processes, with each showing a hierarchical view of the framework operators and their corresponding
HLO operation. The fused operations can be a result of compiler optimizations or of operations running in
parallel on the device. Each bar can be clicked to show which operators are overlapped.
This view can give better insight into how operators translate to Neuron, as well as how certain Neuron compiler options
may improve performance.
Troubleshooting
---------------
TensorBoard launch fails
~~~~~~~~~~~~~~~~~~~~~~~~
::
ImportError: cannot import name 'Mapping' from 'collections'
This is an issue with Python 3.10 and a dependency of an old tensorboard version. To work around this error, please run
``pip install --upgrade tensorboard``. For more information, see https://github.com/tensorflow/tensorboard/pull/5490.
.. |image1| image:: /images/Neuron_Profiler_Tensorboard_Dropdown.jpg
.. |image2| image:: /images/tb-plugin-img12.png
:height: 2914
:width: 5344
:scale: 10%
.. |image3| image:: /images/Neuron_Profiler_Runtime_Trace_Original.jpg
.. |image4| image:: /images/Neuron_Profiler_T1_Op_Framework_View.png
.. |image5| image:: /images/TB_Operator_Timeline_2-10.png
.. |tensorboard_address| raw:: html
<a href="http://localhost:6006" target="_blank">localhost:6006</a>
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _neuronx-plugin-tensorboard:
Neuron Plugin for TensorBoard (Trn1)
====================================
.. contents:: Table of Contents
:local:
:depth: 2
Overview
--------
This guide is for developers who want to better understand how their
model is executed using Neuron SDK through TensorBoard.
The Neuron plugin for TensorBoard provides metrics to the performance of machine learning tasks accelerated using the Neuron SDK. It is
compatible with TensorBoard versions 1.15 and higher. It provides visualizations and profiling results for graphs executed on NeuronCores.
.. note::
The following information is compatible with Neuron SDK for Trn1. For a walkthrough on Inf1, please check out the guide
:ref:`neuron-plugin-tensorboard`.
Enable profiling on Trn1
------------------------
.. note::
Profiling is currently only supported with PyTorch Neuron (``torch-neuronx``).
Please refer to the following guides:
- PyTorch-Neuron
- :ref:`torch-neuronx-profiling-with-tb`
Launch TensorBoard
------------------
In this step, we will process the Neuron profile data and launch TensorBoard.
1. Install the Neuron plugin for Tensorboard on your EC2 instance.
.. code:: bash
python -m pip config set global.extra-index-url "https://pip.repos.neuron.amazonaws.com"
pip install tensorboard-plugin-neuronx
.. note::
If using TensorBoard >= 2.5, please use the ``--load_fast=false`` option when launching.
``tensorboard --logdir results --load_fast=false``
2. After you see the following message, TensorBoard is ready to use. By default,
TensorBoard will be launched at ``localhost:6006``.
::
...
Serving TensorBoard on localhost; to expose to the network, use a proxy or pass --bind_all
TensorBoard 2.4.1 at http://localhost:6006/ (Press CTRL+C to quit)
View results in TensorBoard
---------------------------
In this step, we will view the Neuron plugin for TensorBoard from a browser on your local
development machine.
1. Connect to the EC2 instance where TensorBoard is running while enabling port forwarding.
In this example, we assume TensorBoard has been launched using the default address ``localhost:6006``.
.. code:: bash
# if Ubuntu-based AMI
ssh -i <PEM key file> ubuntu@<instance DNS> -L 6006:localhost:6006
# if AL2-based AMI
ssh -i <PEM key file> ec2-user@<instance DNS> -L 6006:localhost:6006
2. In a browser, visit |tensorboard_address|.
3. In the top navigation bar, switch from ``Graphs`` to ``Neuron``. If it does not show up,
please wait a while and refresh the page while the plugin loads. If the issue persists, check
the ``Inactive`` dropdown list on the right and check for ``Neuron``.
|image1|
4. If TensorBoard failed to find the generated logs, you will see the following message:
|image2|
In this case, please make sure the version of the ``aws-neuronx-tools``
package and the Neuron framework package is from Neuron release 2.6 or newer.
Neuron Trace View
-----------------
|image3|
The trace view gives a high level timeline of execution by aligning Neuron events, such as Neuron Device execution,
data transfers, and Collective Compute synchronization (if applicable), with other events from the XLA profiler.
Use this view to better understand bottlenecks during the run, and potentially experiment with how execution changes
by moving the ``mark_step()`` call which will execute the graph.
Neuron Operator View
--------------------
|image4|
The operator view can show timing information for both the framework operators and HLO operators by selecting
the ``operator-framework`` and ``operator-hlo`` tools respectively. The pie charts show breakdowns of the time taken
by device, as well as per operator on a single device. The table below lists out the operators and can be sorted by clicking
on the columnn headers. For fused operations, hover over the ``?`` to see which operators are being executed.
For a quick glance at the most time consuming operators, click the ``Time %`` column in the table to sort by the relative
time spent on this type of operation compared to the rest of the model.
Neuron Operator Timeline View
-----------------------------
|image5|
The operator timeline view is a detailed look into a single execution with Neuron. A high level overview at the top breaks
down the execution into categories, including Neuron Runtime setup time, as well as NeuronCore compute engine and DMA engine busyness.
Activity on the compute and DMA engines is further categorized into compute, control, and data transfer intervals, which are
shown as separate processes, with each showing a hierarchical view of the framework operators and their corresponding
HLO operations. Fused operations can result from compiler optimizations or from operations running in
parallel on the device. Each bar can be clicked to show information about which operators are overlapped.
This view can give better insight into how operators translate to Neuron, as well as how certain Neuron compiler options
may improve performance.
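As a hedged illustration that is not part of the original tutorial, one way to experiment with compiler options is to pass extra ``neuronx-cc`` flags through the ``NEURON_CC_FLAGS`` environment variable before the graph is first compiled, then compare the resulting operator timelines; the specific flag below is only an example:
::
    import os

    # Set before the first compilation of the graph; the flag shown here is
    # just an example of a compiler option you might experiment with.
    os.environ["NEURON_CC_FLAGS"] = "--model-type=transformer"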
Troubleshooting
---------------
TensorBoard launch fails
~~~~~~~~~~~~~~~~~~~~~~~~
::
ImportError: cannot import name 'Mapping' from 'collections'
This is an issue with Python 3.10 and a dependency of an old TensorBoard version. To work around this error, please run
``pip install --upgrade tensorboard``. For more information, see https://github.com/tensorflow/tensorboard/pull/5490.
.. |image1| image:: /images/Neuron_Profiler_Tensorboard_Dropdown.jpg
.. |image2| image:: /images/tb-plugin-img12.png
:height: 2914
:width: 5344
:scale: 10%
.. |image3| image:: /images/Neuron_Profiler_Runtime_Trace_Original.jpg
.. |image4| image:: /images/Neuron_Profiler_T1_Op_Framework_View.png
.. |image5| image:: /images/TB_Operator_Timeline_2-10.png
.. |tensorboard_address| raw:: html
<a href="http://localhost:6006" target="_blank">localhost:6006</a>
```
.. _neuron_check_model:
Neuron Check Model
^^^^^^^^^^^^^^^^^^
Overview
========
The Neuron Check Model tool provides users with basic information about a compiled or uncompiled model's operations
without the use of TensorBoard-Neuron. For additional visibility into the models, please see :ref:`neuron-plugin-tensorboard`.
The Neuron Check Model tool scans the user's uncompiled model and provides a table of the operations within the uncompiled
model. By default, the table shows each operation type, the number of instances of that type within the model, and whether
the type is supported in Neuron. If the --show_names option is specified, the table shows each operation by name and
whether the type of that operation is supported in Neuron.
If the model is already compiled, the tool also provides the same table of operations as for an uncompiled model. The table
includes the Neuron subgraph type and the number of instances of that type, along with operations that have not been
compiled to Neuron. Additionally, the tool displays a message showing the minimum number of NeuronCores required to run the
model, followed by another table which lists the Neuron subgraphs by name and the number of pipelined
NeuronCores used by each subgraph. More information about NeuronCore Pipeline can be found in
:ref:`neuroncore-pipeline`. If the --expand_subgraph option is specified, the operations within each subgraph are
printed below the subgraph information.
The Neuron Check Model tool is currently available for TensorFlow and MXNet. To check a PyTorch model, please use
the torch.neuron.analyze_model function as shown in the PyTorch-Neuron Getting Started tutorial :ref:`/src/examples/pytorch/resnet50.ipynb`.
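As a rough sketch only (the exact usage is shown in the PyTorch tutorial linked above), analyzing a PyTorch model with torch.neuron.analyze_model typically looks like the following; the model and input shape here are placeholders:
::
    import torch
    import torch.neuron
    import torchvision.models as models

    # Placeholder model and example input; substitute your own model.
    model = models.resnet50(pretrained=True).eval()
    example_inputs = torch.zeros([1, 3, 224, 224], dtype=torch.float32)

    # Prints a summary of which operators can run on Neuron.
    torch.neuron.analyze_model(model, example_inputs=[example_inputs])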
TensorFlow-Neuron Check Model
=============================
The following example shows how to run the TensorFlow-Neuron Check Model tool with the TensorFlow ResNet50 tutorial.
1. Start with the TensorFlow ResNet50 tutorial at :ref:`/src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb` and do the first three steps of the
tutorial. Please stay in the Python environment that you setup during the tutorial.
2. Install needed tensorflow_hub package and download the tool:
::
pip install tensorflow_hub
wget https://raw.githubusercontent.com/aws/aws-neuron-sdk/master/src/neuron-gatherinfo/tf_neuron_check_model.py
python tf_neuron_check_model.py -h
::
usage: tf_neuron_check_model.py [-h] [--show_names] [--expand_subgraph]
model_path
positional arguments:
model_path a TensorFlow SavedModel directory (currently supporting
TensorFlow v1 SaveModel only).
optional arguments:
-h, --help show this help message and exit
--show_names list operation by name instead of summarizing by type
(caution: this option will generate many lines of output
for a large model).
--expand_subgraph show subgraph operations.
3. After step 3 of the TensorFlow ResNet50 tutorial, you can check the uncompiled model to see Neuron-supported operations (currently supporting TensorFlow v1 SavedModel only):
::
$ python tf_neuron_check_model.py ws_resnet50/resnet50/
* The following table shows the supported and unsupported operations within this uncompiled model.
* Each line shows an operation type, the number of instances of that type within model,
* and whether the type is supported in Neuron.
* Some operation types are excluded from table because they are no-operations or training-related operations:
['Placeholder', 'PlaceholderWithDefault', 'NoOp', 'Const', 'Identity', 'IdentityN', 'VarHandleOp',
'VarIsInitializedOp', 'AssignVariableOp', 'ReadVariableOp', 'StringJoin', 'ShardedFilename', 'SaveV2',
'MergeV2Checkpoints', 'RestoreV2']
Op Type Num Instances Neuron Supported ?
------- ------------- ------------------
Pad 2 Yes
RandomUniform 54 Yes
Sub 54 Yes
Mul 54 Yes
Add 54 Yes
Conv2D 53 Yes
BiasAdd 54 Yes
FusedBatchNormV3 53 Yes
Relu 49 Yes
MaxPool 1 Yes
AddV2 16 Yes
Fill 56 Yes
Mean 1 Yes
MatMul 1 Yes
Softmax 1 Yes
Pack 1 Yes
* Total inference operations: 504
* Total Neuron supported inference operations: 504
* Percent of total inference operations supported by Neuron: 100.0
4. You can also check the compiled model to see the number of pipelined NeuronCores for each subgraph:
::
$ python tf_neuron_check_model.py ws_resnet50/resnet50_neuron/
* Found 1 Neuron subgraph(s) (NeuronOp(s)) in this compiled model.
* Use this tool on the original uncompiled model to see Neuron supported operations.
* The following table shows all operations, including Neuron subgraphs.
* Each line shows an operation type, the number of instances of that type within model,
* and whether the type is supported in Neuron.
* Some operation types are excluded from table because they are no-operations or training-related operations:
['Placeholder', 'PlaceholderWithDefault', 'NoOp', 'Const', 'Identity', 'IdentityN', 'VarHandleOp',
'VarIsInitializedOp', 'AssignVariableOp', 'ReadVariableOp', 'StringJoin', 'ShardedFilename', 'SaveV2',
'MergeV2Checkpoints', 'RestoreV2']
Op Type Num Instances Neuron Supported ?
------- ------------- ------------------
NeuronOp 1 Yes
* Please run this model on Inf1 instance with at least 1 NeuronCore(s).
* The following list show each Neuron subgraph with number of pipelined NeuronCores used by subgraph
* (and subgraph operations if --expand_subgraph is used):
Subgraph Name Num Pipelined NeuronCores
------------- -------------------------
conv5_block3_3_bn/FusedBatchNormV3/ReadVariableOp/neuron_op_d6f098c01c780733 1
5. When showing subgraph information, you can use --expand_subgraph to show operation types in each subgraph:
::
$ python tf_neuron_check_model.py ws_resnet50/resnet50_neuron/ --expand_subgraph
(output truncated to show subgraph information only)
Subgraph Name Num Pipelined NeuronCores
------------- -------------------------
conv5_block3_3_bn/FusedBatchNormV3/ReadVariableOp/neuron_op_d6f098c01c780733 1
Op Type Num Instances
------- -------------
MatMul 1
Relu 49
Add 16
FusedBatchNorm 53
BiasAdd 54
Conv2D 53
Pad 2
Mean 1
MaxPool 1
Softmax 1
6. Use --show_names to see full operation names (caution: this option will generate many lines of output for a large model):
::
$ python tf_neuron_check_model.py ws_resnet50/resnet50_neuron/ --show_names
* Found 1 Neuron subgraph(s) (NeuronOp(s)) in this compiled model.
* Use this tool on the original uncompiled model to see Neuron supported operations.
* The following table shows all operations, including Neuron subgraphs.
* Each line shows an operation name and whether the type of that operation is supported in Neuron.
* Some operation types are excluded from table because they are no-operations or training-related operations:
['Placeholder', 'PlaceholderWithDefault', 'NoOp', 'Const', 'Identity', 'IdentityN', 'VarHandleOp',
'VarIsInitializedOp', 'AssignVariableOp', 'ReadVariableOp', 'StringJoin', 'ShardedFilename', 'SaveV2',
'MergeV2Checkpoints', 'RestoreV2']
Op Name Op Type Neuron Supported ?
------- ------- ------------------
conv5_block3_3_bn/FusedBatchNormV3/ReadVariableOp/neuron_op_d6f098c01c780733 NeuronOp Yes
* Please run this model on Inf1 instance with at least 1 NeuronCore(s).
* The following list show each Neuron subgraph with number of pipelined NeuronCores used by subgraph
* (and subgraph operations if --expand_subgraph is used):
Subgraph Name Num Pipelined NeuronCores
------------- -------------------------
conv5_block3_3_bn/FusedBatchNormV3/ReadVariableOp/neuron_op_d6f098c01c780733 1
MXNet-Neuron Check Model
========================
The following example shows how to run the MXNet-Neuron Check Model tool with the MXNet ResNet50 tutorial.
1. Start with the MXNet ResNet50 tutorial at :ref:`/src/examples/mxnet/resnet50/resnet50.ipynb` and do the first three steps of the tutorial.
Please stay in the Python environment that you setup during the tutorial.
2. Download the tool:
::
wget https://raw.githubusercontent.com/aws/aws-neuron-sdk/master/src/neuron-gatherinfo/mx_neuron_check_model.py
python mx_neuron_check_model.py -h
::
usage: mx_neuron_check_model.py [-h] [--show_names] [--expand_subgraph]
model_path
positional arguments:
model_path path prefix to MXNet model (the part before -symbol.json)
optional arguments:
-h, --help show this help message and exit
--show_names list operation by name instead of summarizing by type
(caution: this option will generate many lines of output
for a large model).
--expand_subgraph show subgraph operations.
3. After step 3 of the MXNet ResNet50 tutorial, you can check the uncompiled model to see Neuron-supported operations:
::
$ python mx_neuron_check_model.py resnet-50
* The following table shows the supported and unsupported operations within this uncompiled model.
* Each line shows an operation type, the number of instances of that type within model,
* and whether the type is supported in Neuron.
* Some operation types are excluded from table because they are no-operations or training-related operations:
['null']
Op Type Num Instances Neuron Supported ?
------- ------------- ------------------
BatchNorm 51 Yes
Convolution 53 Yes
Activation 50 Yes
Pooling 2 Yes
elemwise_add 16 Yes
Flatten 1 Yes
FullyConnected 1 Yes
SoftmaxOutput 1 No
* Total inference operations: 175
* Total Neuron supported inference operations: 174
* Percent of total inference operations supported by Neuron: 99.4
4. You can also check the compiled model to see the number of pipelined NeuronCores for each subgraph:
::
$ python mx_neuron_check_model.py resnet-50_compiled
* Found 1 Neuron subgraph(s) (_neuron_subgraph_op(s)) in this compiled model.
* Use this tool on the original uncompiled model to see Neuron supported operations.
* The following table shows all operations, including Neuron subgraphs.
* Each line shows an operation type, the number of instances of that type within model,
* and whether the type is supported in Neuron.
* Some operation types are excluded from table because they are no-operations or training-related operations:
['null']
Op Type Num Instances Neuron Supported ?
------- ------------- ------------------
_neuron_subgraph_op 1 Yes
SoftmaxOutput 1 No
* Please run this model on Inf1 instance with at least 1 NeuronCore(s).
* The following list show each Neuron subgraph with number of pipelined NeuronCores used by subgraph
* (and subgraph operations if --expand_subgraph is used):
Subgraph Name Num Pipelined NeuronCores
------------- -------------------------
_neuron_subgraph_op0 1
5. When showing subgraph information, you can use --expand_subgraph to show operation types in each subgraph:
::
$ python mx_neuron_check_model.py resnet-50_compiled --expand_subgraph
(output truncated to show subgraph information only)
Subgraph Name Num Pipelined NeuronCores
------------- -------------------------
_neuron_subgraph_op0 1
Op Type Num Instances
------- -------------
BatchNorm 51
Convolution 53
Activation 50
Pooling 2
elemwise_add 16
Flatten 1
FullyConnected 1
6. Use --show_names to see full operation names (caution: this option will generate many lines of output for a large model):
::
$ python mx_neuron_check_model.py resnet-50_compiled --show_names
* Found 1 Neuron subgraph(s) (_neuron_subgraph_op(s)) in this compiled model.
* Use this tool on the original uncompiled model to see Neuron supported operations.
* The following table shows all operations, including Neuron subgraphs.
* Each line shows an operation name and whether the type of that operation is supported in Neuron.
* Some operation types are excluded from table because they are no-operations or training-related operations:
['null']
Op Name Op Type Neuron Supported ?
------- ------- ------------------
_neuron_subgraph_op0 _neuron_subgraph_op Yes
softmax SoftmaxOutput No
* Please run this model on Inf1 instance with at least 1 NeuronCore(s).
* The following list show each Neuron subgraph with number of pipelined NeuronCores used by subgraph
* (and subgraph operations if --expand_subgraph is used):
Subgraph Name Num Pipelined NeuronCores
------------- -------------------------
_neuron_subgraph_op0 1
```
```
.. _neuron-tensorboard-rn:
Neuron Plugin for TensorBoard Release Notes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. contents:: Table of Contents
:local:
:depth: 1
Known Issues and Limitations - Updated 11/29/2022
================================================
The following are not limitations in the Neuron plugin, but may affect your ability to
use TensorBoard.
- The Neuron plugin for Trn1 (``tensorboard-plugin-neuronx``) is not compatible with the Neuron plugin
for Inf1 (``tensorboard-plugin-neuron``). Please ensure you have only the correct package installed.
Neuron Plugin for TensorBoard release [2.5.39.0]
===============================================
Date: 7/19/2023
Summary
-------
- Minor updates.
Neuron Plugin for TensorBoard release [2.5.37.0]
===============================================
Date: 6/14/2023
Summary
-------
- Minor updates.
Neuron Plugin for TensorBoard release [2.5.26.0]
================================================
Date: 05/01/2023
Summary
-------
* Neuron operator timeline view now includes Neuron Runtime setup/teardown time and a collapsed execution of NC engines and DMA - see Tensorboard tutorial for updated views.
* Improved execution categorization to include "control" instructions
Neuron Plugin for TensorBoard release [2.5.25.0]
================================================
Date: 03/28/2023
Summary
-------
- Supports INF2 and TRN1.
Neuron Plugin for TensorBoard release [2.5.0.0]
===============================================
Date: 12/09/2022
Summary
-------
- Added support for PyTorch Neuron on Trn1 (``torch-neuronx``) with new views! Includes a trace view,
an operator view, and an operator timeline view. For more info, check out the documentation
:ref:`neuronx-plugin-tensorboard`.
.. important::
- You must update to the latest Neuron Tools (``aws-neuronx-tools`` version 2.6 or newer) and install
``tensorboard-plugin-neuronx`` for proper functionality of the Neuron plugin on Trn1.
- For Inf1, please continue to use ``tensorboard-plugin-neuron``. Refer to the getting started guide
on Inf1 :ref:`neuron-plugin-tensorboard`.
Neuron Plugin for TensorBoard release [2.4.0.0]
===============================================
Date: 04/29/2022
Summary
-------
- Minor updates.
Neuron Plugin for TensorBoard release [2.3.0.0]
===============================================
Date: 03/25/2022
Summary
-------
- Minor updates.
Neuron Plugin for TensorBoard release [2.2.0.0]
===============================================
Date: 10/27/2021
New in this release
-------------------
- Neuron Plugin for TensorBoard now supports applications built with Neuron Runtime 2.x (``libnrt.so``).
.. important::
- You must update to the latest Neuron Driver (``aws-neuron-dkms`` version 2.1 or newer)
for proper functionality of the new runtime library.
- Read :ref:`introduce-libnrt`
application note that describes :ref:`why are we making this
change <introduce-libnrt-why>` and
how :ref:`this change will affect the Neuron
SDK <introduce-libnrt-how-sdk>` in detail.
- Read :ref:`neuron-migrating-apps-neuron-to-libnrt` for detailed information of how to
migrate your application.
[2.1.2.0]
=========
Date: 8/12/2021
Summary
-------
- Adds support for Neuron Tensorflow 2.5+
.. _2.1.0.0:
[2.1.0.0]
=========
Date: 5/28/2021
Summary
-------
- No major changes or fixes. Released with other Neuron packages.
.. _2.0.29.0:
[2.0.29.0]
==========
Date: 4/30/2021
Summary
-------
- First release of the Neuron plugin for TensorBoard. Check it out here:
:ref:`neuron-plugin-tensorboard`.
- The Neuron plugin is now compatible with TensorBoard 2.0 and higher,
in addition to TensorBoard 1.15
- Provides a centralized place to better understand execution using
Neuron SDK.
- Continues to support visualization for TensorFlow graphs, with support
for PyTorch and MXNet coming in future releases.
- Neuron plugin for TensorBoard is supported for Neuron tools >= 1.5, which was first
  introduced in the Neuron v1.13.0 release.
- TensorBoard-Neuron is deprecated, and only supported for Neuron tools <= 1.4.12.0.
  The final version, 1.4.12.0, is part of the Neuron v1.12.2 release.
.. _11501260:
[1.15.0.1.2.6.0]
================
Date: 2/24/2021
Summary
-------
- Fix for CVE-2021-3177.
.. _11501110:
[1.15.0.1.1.1.0]
================
Date: 12/23/2020
Summary
-------
- Minor internal improvements.
.. _1150106150:
[1.15.0.1.0.615.0]
==================
Date: 11/17/2020
Summary
-------
- Fix issue with viewing chrome trace in Neuron profile plugin in
Chrome 80+.
Resolved Issues
---------------
- Updated dependencies to polyfill missing APIs used by chrome trace in
newer browser versions.
.. _1150106000:
[1.15.0.1.0.600.0]
==================
Date: 09/22/2020
Summary
-------
- Minor internal improvements.
.. _1150105700:
[1.15.0.1.0.570.0]
==================
Date: 08/08/2020
.. _summary-1:
Summary
-------
- Minor internal improvements.
.. _1150105130:
[1.15.0.1.0.513.0]
==================
Date: 07/16/2020
.. _summary-2:
Summary
-------
- Minor internal improvements.
.. _1150104910:
[1.15.0.1.0.491.0]
==================
Date 6/11/2020
.. _summary-3:
Summary
-------
Fix issue where utilization was missing in the op-profile view.
Resolved Issues
---------------
- The op-profile view in the Neuron Profile plugin now correctly shows
the overall NeuronCore utilization.
.. _1150104660:
[1.15.0.1.0.466.0]
==================
Date 5/11/2020
.. _summary-4:
Summary
-------
Fix potential installation issue when installing both tensorboard and
tensorboard-neuron.
.. _resolved-issues-1:
Resolved Issues
---------------
- Added tensorboard as a dependency in tensorboard-neuron. This
prevents the issue of overwriting tensorboard-neuron features when
tensorboard is installed after tensorboard-neuron.
Other Notes
-----------
.. _1150103920:
[1.15.0.1.0.392.0]
==================
Date 3/26/2020
.. _summary-5:
Summary
-------
Added ability to view CPU node latency in the Graphs plugin and the
Neuron Profile plugins.
Major New Features
------------------
- Added an aggregate view in addition to the current Neuron subgraph
view for both the Graphs plugin and the Neuron Profile plugin.
- When visualizing a graph executed on a Neuron device, CPU node
latencies are available when coloring the graph by "Compute time"
using the "neuron_profile" tag.
- The Neuron Profile plugin now has an overview page to compare time
spent on Neuron device versus on CPU.
.. _other-notes-1:
Other Notes
-----------
- Requires Neuron-RTD config option "enable_node_profiling" to be set
to "true"
.. _1150103660:
[1.15.0.1.0.366.0]
==================
Date 02/27/2020
.. _summary-6:
Summary
-------
Reduced load times and fixed crashes when loading large models for
visualization.
.. _resolved-issues-2:
Resolved Issues
---------------
- Enable large attribute filtering by default
- Reduced load time for graphs with attributes larger than 1 KB
- Fixed a fail to load graphs with many large attributes totaling more
than 1 GB in size
.. _1150103150:
[1.15.0.1.0.315.0]
==================
Date 12/20/2019
.. _summary-7:
Summary
-------
No major changes or fixes. Released with other Neuron packages.
.. _1150103060:
[1.15.0.1.0.306.0]
==================
Date 12/1/2019
.. _summary-8:
Summary
-------
.. _major-new-features-1:
Major New Features
------------------
.. _resolved-issues-3:
Resolved Issues
---------------
.. _known-issues--limits:
Known Issues & Limits
---------------------
Same as prior release
.. _other-notes-2:
Other Notes
-----------
.. _1150102800:
[1.15.0.1.0.280.0]
==================
Date 11/29/2019
.. _summary-9:
Summary
-------
Initial release packaged with DLAMI.
.. _major-new-features-2:
Major New Features
------------------
N/A, initial release.
See user guide here:
https://github.com/aws/aws-neuron-sdk/blob/master/docs/neuron-tools/getting-started-tensorboard-neuron.md
.. _resolved-issues-4:
Resolved Issues
---------------
N/A - first release
.. _known-issues--limits-1:
Known Issues & Limits
---------------------
- Must install TensorBoard-Neuron by itself, or after regular
TensorBoard is installed. If regular Tensorboard is installed after
TensorBoard-Neuron, it may overwrite some needed files.
- Utilization missing in Op Profile due to missing FLOPs calculation
(see overview page instead)
- Neuron Profile plugin may not immediately show up on launch (try
reloading the page)
- Graphs with NeuronOps may take a long time to load due to attribute
size
- Instructions that cannot be matched to a framework layer/operator
name show as “” (blank)
- CPU Usage section in chrome-trace is not applicable
- Debugger currently supports TensorFlow only
- Visualization requires a TensorFlow-compatible graph
.. _other-notes-3:
Other Notes
-----------
```
```
.. _neuron_gatherinfo:
Using Neuron GatherInfo Tool to collect debug and support information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Overview
========
The Neuron GatherInfo tool ``neuron-gatherinfo.py`` can assist in
automating the collection and packaging of information from Neuron SDK
tools that is useful to both the user and AWS for issue resolution. The tool
gathers log files and other system information. When used to supply
that information to AWS, the tool redacts proprietary and confidential
information. The GatherInfo tool is supplied in source code form,
available here: :github:`Neuron Gatherinfo </src/neuron-gatherinfo/neuron-gatherinfo.py>`
The tool enables developers to gather compiler and inference/runtime
logs. The most common usage is from within one of the supported
ML frameworks that have been integrated with Neuron; information can
be captured from those compile and runtime environments using the
frameworks.
Steps Overview:
~~~~~~~~~~~~~~~
1. Obtain a copy of neuron-gatherinfo.py from
:github:`Neuron Gatherinfo </src/neuron-gatherinfo/neuron-gatherinfo.py>`
2. Install into a location in your $PATH or into a location from where
you can launch the script
3. Use with compile and/or runtime environments
Neuron-CC information gathering
-------------------------------
Step 1: Re-run the compile steps for your workload with increased verbosity or debug levels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- For TensorFlow-Neuron, change the Python code as shown. Note that
  ``compiler-workdir`` is expected to be an empty directory to prevent
  files from other runs from interfering with the information
  gathering. The call to the compile function has to be augmented with
  the ``--verbose`` compiler argument and the ``compiler_workdir``
  argument. In addition, please capture the stdout messages into a file
  (for example, by redirecting stdout to a file):
::
tfn.saved_model.compile(model_dir, compiled_model_dir, compiler_args=['--verbose', '2', '--pipeline', 'compile', 'SaveTemps'], compiler_workdir='./compiler-workdir')
- For Neuron Apache MXNet (Incubating), add compiler arguments as shown below and run the
compilation process from an empty workdir:
::
import mxnet as mx
import os
from packaging import version
mxnet_version = version.parse(mx.__version__)
if mxnet_version >= version.parse("1.8"):
import mx_neuron as neuron
else:
from mxnet.contrib import neuron
...
os.environ['SUBGRAPH_INFO'] = '1'
compile_args = { '--verbose' : 2, '--pipeline' : 'compile', 'flags' : ['SaveTemps'] }
csym, cargs, cauxs = neuron.compile(sym, args, auxs, inputs=inputs, **compile_args)
.. _step-2-run-neuron-gatherinfopy-to-gather-information-to-share:
Step 2: Run neuron-gatherinfo.py to gather information to share
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The result will be a tar.gz file.
Neuron Runtime information gathering
------------------------------------
Step 1: EXECUTE inference steps for your workload with increased verbosity or debug levels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the case of runtime information, the tool **neuron-dump.py** is used
by **neuron-gatherinfo.py** to gather that information. Make sure that
you have the Neuron tools package (``aws-neuron-tools``) installed.
.. _step-2-run-neuron-gatherinfopy-to-gather-information-to-share-1:
Step 2: Run neuron-gatherinfo.py to gather information to share
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The result will be a tar.gz file.
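As a hedged sketch (directory and file names are placeholders), a runtime-focused invocation using only the options documented in the ``--help`` output below might look like:
::
    sudo ./neuron-gatherinfo.py -o runtime-info-for-debugging -r /path/to/neuron-runtime-output -s stdout-from-inference.out --verbose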
Tool Usage Reference
====================
Run neuron-gatherinfo.py using the ``--help`` option:
::
bash $ ~/bin/neuron-gatherinfo.py --help
usage: neuron-gatherinfo.py [-h] [--additionalfileordir ADDFLDIR] [-c CCDIR]
[-i] [-f FILTERFILE] [-m] -o OUTDIR [-r RTDIR] -s
STDOUT [-v]
Usage: /home/user/bin/neuron-gatherinfo.py [options]
This program is used to gather information from this system for analysis
and debugging
optional arguments:
-h, --help show this help message and exit
--additionalfileordir ADDFLDIR
Additional file or directory that the user wants to
provide in the archive. The user can sanitize this
file or directory before sharing
-c CCDIR, --compileroutdir CCDIR
Location of the neuron-cc generated files
-i, --include By default, only the lines containing (grep) patterns
like 'nrtd|neuron|kernel:' from the syslog are copied.
Other lines are excluded. Using this option allows the
timestamp section of other lines to be included. The
rest of the contents of the line itself are elided.
Providing the timestamp section may provide time
continuity while viewing the copied syslog file
-f FILTERFILE, --filter FILTERFILE
-m, --modeldata By using this option, the entire compiler work
directory's contents will be included (excluding the
.pb files, unless an additional option is used). This
would include model information, etc. The files that
are included, by default, are these: graph_def.neuron-
cc.log, all_metrics.csv, hh-tr-operand-
tensortensor.json
-o OUTDIR, --out OUTDIR
The output directory where all the files and other
information will be stored. The output will be stored
as an archive as well as the actual directory where
all the contents are copied. This will allow a simple
audit of the files, if necessary. *** N O T E ***:
Make sure that this directory has enough space to hold
the files and resulting archive
-r RTDIR, --runtimeoutdir RTDIR
Location of the neuron runtime generated files
-s STDOUT, --stdout STDOUT
The file where the stdout of the compiler run was
saved
-v, --verbose Verbose mode displays commands executed and any
additional information which may be useful in
debugging the tool itself
Examples
========
Example 1: no ML model information gathered (default behavior)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this case, the tool will archive just the default information
gathering:
::
bash $ sudo ~/bin/neuron-gatherinfo.py -o compile-and-run-info-for-debugging-no-model-info -i --verbose -s stdout-from-compile_resnet50.out -c compiler-workdir
Running cmd: lscpu and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-lscpu.txt
Running cmd: lshw and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-lshw.txt
Running cmd: lspci | grep -i Amazon and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-lspci.txt
Running cmd: neuron-cc --version and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-neuron-cc.txt
Running cmd: neuron-ls and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-neuron-ls.txt
<SNIP>
******
Archive created at:
/home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo.tar.gz
From directory:
/home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo
******
.. _example-2--model-ml-information-gathered-using-the-modeldata-option:
Example 2: ML model information gathered using the ``--modeldata`` option
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this case, the tool will archive the compiler work directory in
addition to the default information gathering:
::
bash $ sudo ~/bin/neuron-gatherinfo.py -o compile-and-run-info-for-debugging -i --verbose -s stdout-from-compile_resnet50.out -c compiler-workdir --modeldata
<SNIP>
Running cmd: lscpu and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging/neuron-gatherinfo/report-lscpu.txt
Running cmd: lshw and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging/neuron-gatherinfo/report-lshw.txt
Running cmd: lspci | grep -i Amazon and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging/neuron-gatherinfo/report-lspci.txt
Running cmd: neuron-cc --version and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-neuron-cc.txt
Running cmd: neuron-ls and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-neuron-ls.txt
<SNIP>
******
Archive created at:
/home/user/tutorials-3/compile-and-run-info-for-debugging/neuron-gatherinfo.tar.gz
From directory:
/home/user/tutorials-3/compile-and-run-info-for-debugging/neuron-gatherinfo
******
**************************
Based on your command line option, we're also packaging these files:
graph_def.neuron-cc.log
all_metrics.csv
hh-tr-operand-tensortensor.json
And this directory: /home/user/tutorials-3/compiler-workdir
**************************
```
|
<html><head><meta name="color-scheme" content="light dark"></head><body><pre style="word-wrap: break-word; white-space: pre-wrap;">.. _neuron_gatherinfo:
Using Neuron GatherInfo Tool to collect debug and support information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Overview
========
The Neuron GatherInfo tool ``neuron-gatherinfo.py`` can assist in
automating the collection and packaging of information from Neuron SDK
tools that is useful to both user and AWS for issue resolution. The tool
gathers log files and other system information. If being used to supply
that info to AWS, the tool will redact proprietary and confidential
information. The GatherInfo tool is supplied in source code form -
available here: :github:`Neuron Gatherinfo </src/neuron-gatherinfo/neuron-gatherinfo.py>`
The tool enables developers to gather compiler and inference/runtime
logs. Additionally, the common usage is from within one of the supported
ML frameworks that have been integrated with Neuron, and information can
be captured from those compile/runtime environments using the
frameworks.
Steps Overview:
~~~~~~~~~~~~~~~
1. Obtain a copy of neuron-gatherinfo.py from
:github:`Neuron Gatherinfo </src/neuron-gatherinfo/neuron-gatherinfo.py>`
2. Install into a location in your $PATH or into a location from where
you can launch the script
3. Use with compile and/or runtime environments
Neuron-CC information gathering
-------------------------------
Step 1: Re-run the compile steps for your workload with increased verbosity or debug levels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- For TensorFlow-Neuron, change the Python code as shown. Note that
‘compiler-workdir’ is expected to be an empty directory to prevent
files from other runs from interfering with the information
gathering. The call to the compile function has to be augmented with
the **verbose** and the \**compiler_workdir \**arguments. In
addition, please capture the stdout messages into a file (for
example, by redirecting the stdout to a file)
::
tfn.saved_model.compile(model_dir, compiled_model_dir, compiler_args=['--verbose', '2', '--pipeline', 'compile', 'SaveTemps'], compiler_workdir='./compiler-workdir')
- For Neuron Apache MXNet (Incubating), add compiler arguments as shown below and run the
compilation process from an empty workdir:
::
import mxnet as mx
import os
from packaging import version
mxnet_version = version.parse(mx.__version__)
if mxnet_version >= version.parse("1.8"):
import mx_neuron as neuron
else:
from mxnet.contrib import neuron
...
os.environ['SUBGRAPH_INFO'] = '1'
compile_args = { '--verbose' : 2, '--pipeline' : 'compile', 'flags' : ['SaveTemps'] }
csym, cargs, cauxs = neuron.compile(sym, args, auxs, inputs=inputs, **compile_args)
.. _step-2-run-neuron-gatherinfopy-to-gather-information-to-share:
Step 2: Run neuron-gatherinfo.py to gather information to share
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The output result will be a tar.gz file.
Neuron Runtime information gathering
------------------------------------
Step 1: EXECUTE inference steps for your workload with increased verbosity or debug levels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the case of runtime information, the tool **neuron-dump.py** is used
by \**neuron-gatherinfo.py \**to gather that information. Make sure that
you have the neuron tools package (aws-neuron-tools) installed.
.. _step-2-run-neuron-gatherinfopy-to-gather-information-to-share-1:
Step 2: Run neuron-gatherinfo.py to gather information to share
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The output result will be a tar.gz file.
Tool Usage Reference
====================
Run neuron-gatherinfo.py using the “—help“ option:
::
bash $ ~/bin/neuron-gatherinfo.py --help
usage: neuron-gatherinfo.py [-h] [--additionalfileordir ADDFLDIR] [-c CCDIR]
[-i] [-f FILTERFILE] [-m] -o OUTDIR [-r RTDIR] -s
STDOUT [-v]
Usage: /home/user/bin/neuron-gatherinfo.py [options]
This program is used to gather information from this system for analysis
and debugging
optional arguments:
-h, --help show this help message and exit
--additionalfileordir ADDFLDIR
Additional file or directory that the user wants to
provide in the archive. The user can sanitize this
file or directory before sharing
-c CCDIR, --compileroutdir CCDIR
Location of the neuron-cc generated files
-i, --include By default, only the lines containing (grep) patterns
like 'nrtd|neuron|kernel:' from the syslog are copied.
Other lines are excluded. Using this option allows the
timestamp section of other lines to be included. The
rest of the contents of the line itself are elided.
Providing the timestamp section may provide time
continuity while viewing the copied syslog file
-f FILTERFILE, --filter FILTERFILE
-m, --modeldata By using this option, the entire compiler work
directory's contents will be included (excluding the
.pb files, unless an additional option is used). This
would include model information, etc. The files that
are included, by default, are these: graph_def.neuron-
cc.log, all_metrics.csv, hh-tr-operand-
tensortensor.json
-o OUTDIR, --out OUTDIR
The output directory where all the files and other
information will be stored. The output will be stored
as an archive as well as the actual directory where
all the contents are copied. This will allow a simple
audit of the files, if necessary. *** N O T E ***:
Make sure that this directory has enough space to hold
the files and resulting archive
-r RTDIR, --runtimeoutdir RTDIR
Location of the neuron runtime generated files
-s STDOUT, --stdout STDOUT
The file where the stdout of the compiler run was
saved
-v, --verbose Verbose mode displays commands executed and any
additional information which may be useful in
debugging the tool itself
Examples
========
Example 1: no ML model information gathered (default behavior)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this case, the tool will archive just the default information
gathering:
::
bash $ sudo ~/bin/neuron-gatherinfo.py -o compile-and-run-info-for-debugging-no-model-info -i --verbose -s stdout-from-compile_resnet50.out -c compiler-workdir
Running cmd: lscpu and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-lscpu.txt
Running cmd: lshw and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-lshw.txt
Running cmd: lspci | grep -i Amazon and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-lspci.txt
Running cmd: neuron-cc --version and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-neuron-cc.txt
Running cmd: neuron-ls and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-neuron-ls.txt
<SNIP>
******
Archive created at:
/home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo.tar.gz
From directory:
/home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo
******
.. _example-2--model-ml-information-gathered-using-the-modeldata-option:
Example 2: ML model information gathered using the "--modeldata" option
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this case, the tool will archive the compiler work directory in
addition to the default information gathering
::
bash $ sudo ~/bin/neuron-gatherinfo.py -o compile-and-run-info-for-debugging -i --verbose -s stdout-from-compile_resnet50.out -c compiler-workdir --modeldata
<SNIP>
Running cmd: lscpu and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging/neuron-gatherinfo/report-lscpu.txt
Running cmd: lshw and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging/neuron-gatherinfo/report-lshw.txt
Running cmd: lspci | grep -i Amazon and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging/neuron-gatherinfo/report-lspci.txt
Running cmd: neuron-cc --version and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-neuron-cc.txt
Running cmd: neuron-ls and capturing output in file: /home/user/tutorials-3/compile-and-run-info-for-debugging-no-model-info/neuron-gatherinfo/report-neuron-ls.txt
<SNIP>
******
Archive created at:
/home/user/tutorials-3/compile-and-run-info-for-debugging/neuron-gatherinfo.tar.gz
From directory:
/home/user/tutorials-3/compile-and-run-info-for-debugging/neuron-gatherinfo
******
**************************
Based on your command line option, we're also packaging these files:
graph_def.neuron-cc.log
all_metrics.csv
hh-tr-operand-tensortensor.json
And this directory: /home/user/tutorials-3/compiler-workdir
**************************
|
2023-09-29T20:55:00.764Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/tensorboard/getting-started-tensorboard-neuron-plugin.rst.txt
|
```
.. _neuron-plugin-tensorboard:
Neuron Plugin for TensorBoard (Inf1)
====================================
.. contents:: Table of Contents
:local:
:depth: 2
Overview
--------
This guide is for developers who want to better understand how their
model is executed using Neuron SDK through TensorBoard.
The Neuron plugin for TensorBoard provides metrics on the performance of machine learning tasks accelerated using the Neuron SDK. It is
compatible with TensorBoard versions 1.15 and higher, and provides visualizations and profiling results for graphs executed on NeuronCores.
.. note::
The following information is compatible with Neuron SDK for Inf1. For a walkthrough on the latest version, please check out the guide
:ref:`neuronx-plugin-tensorboard`.
.. note::
Graph visualization is currently only supported for TensorFlow-Neuron. Support
for MXNet-Neuron and PyTorch-Neuron visualization will be added in a future
release.
Compile the neural network
--------------------------
3. Refer to the following guides on how to compile a graph using Neuron SDK.
- TensorFlow-Neuron
- :ref:`/src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb`
- PyTorch-Neuron:
- "Compile model for Neuron" in `PyTorch-Neuron Resnet50 Tutorial`_
- MXNet-Neuron:
- :ref:`/src/examples/mxnet/resnet50/resnet50.ipynb`
Enable profiling
-----------------
In this step, we enable Neuron profile data collection and collect results
from executing an inference.
4.1. To start profiling the neural network and collect inference traces, create a
directory where profile data will be dumped and set the ``NEURON_PROFILE`` environment
variable. In this example, we will assume this directory is ``$HOME/profile``
.. code:: bash
mkdir -p $HOME/profile
export NEURON_PROFILE=$HOME/profile
4.2. Ensure Neuron Tools are executable by setting the ``PATH`` environment variable.
.. code:: bash
export PATH=/opt/aws/neuron/bin:$PATH
4.3. Execute inference!
.. note::
Please run the inference script outside of Jupyter notebook. Profiling in
Jupyter notebook is not supported at this time.
.. note::
Please ensure the inference script executes only one inference, as profiling
results are currently only supported for a single inference.
For more info on how to execute inference, refer to the following guides:
- TensorFlow-Neuron
- :ref:`/src/examples/tensorflow/tensorflow_resnet50/resnet50.ipynb`
- PyTorch-Neuron
- "Run inference on Single Core" in :ref:`/src/examples/pytorch/resnet50.ipynb`
- MXNet-Neuron
- :ref:`/src/examples/mxnet/resnet50/resnet50.ipynb`
4.4. Check if profiling results were successfully saved. In the directory
pointed to by ``NEURON_PROFILE`` environment variable set in Step 4.1, there
should be at least two files, one with the ``.neff`` extension and one with the
``.ntff`` extension. For TensorFlow-Neuron users, the graph file (``.pb``) will
also be in this directory.
.. code:: bash
ls $NEURON_PROFILE
Launch TensorBoard
------------------
In this step, we will process the Neuron profile data and launch TensorBoard.
5.1. Install the Neuron plugin for Tensorboard.
.. include:: /general/setup/install-templates/inf1/tensorboard-plugin-neuron-pip-install.rst
5.2. After collecting the raw profile data, we need to post-process it to create the
log files used by the Neuron plugin. This can be done when launching TensorBoard
by passing an extra flag ``--run_neuron_profiler``. Using this flag will create the
directory specified by ``--logdir`` and populate it with Neuron plugin data. Please
note that the ``NEURON_PROFILE`` environment variable set in Step 4.1 must still point
to the same directory as before.
.. code:: bash
tensorboard --logdir results --run_neuron_profiler
.. note::
If using TensorBoard >= 2.5, please use the ``--load_fast=false`` option when launching.
``tensorboard --logdir results --run_neuron_profiler --load_fast=false``
5.3. After you see the following message, TensorBoard is ready to use. By default,
TensorBoard will be launched at ``localhost:6006`` on the Deployment Instance.
::
...
Running neuron-profile
Serving TensorBoard on localhost; to expose to the network, use a proxy or pass --bind_all
TensorBoard 2.4.1 at http://localhost:6006/ (Press CTRL+C to quit)
View results in TensorBoard
---------------------------
In this step, we will view the Neuron plugin for TensorBoard from a browser on your local
development machine.
6.1. Connect to the Deployment Instance while enabling port forwarding. In this example, we
assume TensorBoard has been launched using the default address ``localhost:6006`` on the
Deployment Instance.
.. code:: bash
# if Ubuntu-based AMI
ssh -i <PEM key file> ubuntu@<instance DNS> -L 6006:localhost:6006
# if AL2-based AMI
ssh -i <PEM key file> ec2-user@<instance DNS> -L 6006:localhost:6006
6.2. In a browser, visit |tensorboard_address|.
6.3. In the top navigation bar, switch from ``Graphs`` to ``Neuron``. If it does not show up,
please wait a while and refresh the page while the plugin loads. If the issue persists, check
the ``Inactive`` dropdown list on the right and check for ``Neuron``.
|image1|
6.4. If TensorBoard failed to find the generated logs, you will see the following message:
|image10|
In this case, please check the console output on the Deployment Instance where TensorBoard was
launched for any warnings or error messages, and make sure the version of the ``aws-neuron-tools``
package is compatible.
.. _tensorboard-plugin-visualize-graph:
Visualize graphs executed on Neuron
-----------------------------------
.. _tensorboard-plugin-graph-device:
Show how the graph was partitioned to run on NeuronCores
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To view how the graph was partitioned to run on NeuronCores, select "Device" under "Graph Color
Schemes" in the left navigation bar.
|image2|
Each operator will be colored according to the device used. In this example, light blue indicates
an operator was executed on CPU, and orange indicates the operator was executed on NeuronCores.
Operators that are white may have been optimized by the Neuron compiler and fused into another
operation.
.. _tensorboard-plugin-graph-time:
Inspect which operators consume the most time
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can also view how long each operator took by changing to the "Compute time" color scheme.
|image3|
This view will show time taken by each layer and will be colored according to how much relative
time the layer took to compute. A lighter shade of red means that a relatively small portion of
compute time was spent in this layer, while a darker red shows that more compute time was used.
.. _tensorboard-plugin-graph-supported-ops:
Check out Neuron-supported operators for each framework
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The "Compatibility" color scheme allows you to better understand what operators are currently
supported by the Neuron compiler - green for compatible ops, red for incompatible ops, and yellow
for subgraphs that contain both compatible and incompatible ops.
|image4|
.. _tensorboard-plugin-graph-filter-device:
Filter view by device
^^^^^^^^^^^^^^^^^^^^^
Additionally, you can choose to filter by CPU and NeuronCores, which will only color ops that
match the selected device(s).
|image5|
Expand/collapse subgraphs and view operator details
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Each rectangular node in the graph represents a subgraph that can be expanded or collapsed by
clicking on the name. Operators will be represented by ellipses, and can be clicked to reveal
more information on that operator, such as inputs and execution device.
|image11|
The ``Expand All`` and ``Collapse All`` buttons can be used to expand or collapse every subgraph.
When using these features, the positioning of the graph may change when redrawing the new graph.
Try using the ``Reset Position`` button and zooming out by scrolling if the graph appears to be missing.
.. _tensorboard-plugin-view-profile:
Viewing the Neuron profile data
-------------------------------
On the right side of the Neuron plugin, information on the profiled inference will be displayed.
.. _tensorboard-plugin-profile-summary:
See performance summary
^^^^^^^^^^^^^^^^^^^^^^^
First is the "Neuron Performance Summary," which gives a quick overview of how Neuron executed the graph,
including information on the number of NeuronCores and both on-NeuronCore time and on-CPU time.
|image6|
.. _tensorboard-plugin-profile-nc:
Get a breakdown of time spent per NeuronCore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Next, the "Neuron Execution" will give more details on how a graph was partitioned for Neuron.
Each entry in the table will show the order it was executed in, what type of device was used, the compute
time (in microseconds), and the percentage of total time spent. To dive deeper into subgraphs, you can
check the "Show Details" box to display the breakdown per NeuronCore.
|image7|
.. _tensorboard-plugin-profile-op:
Get a breakdown of time spent per operator
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The "Op Time Table" section shows the cycle count per operator, much like the "Compute time" coloring
for graph visualization. This table can be sorted by clicking the column names, and searched using the
provided text box in the top right corner. Due to Neuron compiler optimizations, some of the compute may
not be associated with any specific operator and will be categorized as ``unknown``. Additionally, time
spent moving data to and from NeuronCores will fall under ``(ND_ENGINE_LOAD)``.
|image8|
.. |image1| image:: /images/tb-plugin-img1.png
:height: 2914
:width: 5344
:scale: 10%
.. |image2| image:: /images/tb-plugin-img2.png
:height: 2914
:width: 5344
:scale: 10%
.. |image3| image:: /images/tb-plugin-img3.png
:height: 2914
:width: 5344
:scale: 10%
.. |image4| image:: /images/tb-plugin-img4.png
:height: 2914
:width: 5344
:scale: 10%
.. |image5| image:: /images/tb-plugin-img5.png
:height: 2914
:width: 5344
:scale: 10%
.. |image6| image:: /images/tb-plugin-img6.png
:height: 2914
:width: 5344
:scale: 10%
.. |image7| image:: /images/tb-plugin-img7.png
:height: 2914
:width: 5344
:scale: 10%
.. |image8| image:: /images/tb-plugin-img8.png
:height: 2914
:width: 5344
:scale: 10%
.. |image9| image:: /images/tb-plugin-img9.png
:height: 2914
:width: 5344
:scale: 10%
.. |image10| image:: /images/tb-plugin-img10.png
:height: 2914
:width: 5344
:scale: 10%
.. |image11| image:: /images/tb-plugin-img11.png
:height: 2826
:width: 5341
:scale: 10%
.. _PyTorch-Neuron Resnet50 Tutorial: ../../src/examples/pytorch/resnet50.ipynb
.. |tensorboard_address| raw:: html
<a href="http://localhost:6006" target="_blank">localhost:6006</a>
```
|
|
2023-09-29T20:55:00.785Z
|
|
Track System Resource Utilization during Training with neuron-monitor using PyTorch Neuron — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/tools/tutorials/tutorial-neuron-monitor-mnist.html#track-system-monitor
|
# Track System Resource Utilization during Training with neuron-monitor using PyTorch Neuron — AWS Neuron Documentation
## Contents
- [Multi-layer Perceptron MNIST Model](#multi-layer-perceptron-mnist-model)
- [The Training Job](#the-training-job)
- [Setting up **Prometheus** and **Grafana**](#setting-up-prometheus-and-grafana)
- [Setting up **Prometheus**](#setting-up-prometheus)
- [Setting up **Grafana**](#setting-up-grafana)
- [Monitoring the Training Workload](#monitoring-the-training-workload)
_This document is relevant for_: `Inf2`, `Trn1`, `Trn1n`
## Track System Resource Utilization during Training with neuron-monitor using PyTorch Neuron[#](#track-system-resource-utilization-during-training-with-neuron-monitor-using-pytorch-neuron "Permalink to this headline")
This tutorial explains how to monitor resource utilization using **neuron-monitor**, **Prometheus** and **Grafana** while running a multi-layer perceptron MNIST model on Trainium using PyTorch Neuron.
## [The Training Job](#id2)[#](#the-training-job "Permalink to this headline")
For this tutorial, we will make the original MLP MNIST training script do more work, giving us more system utilization data to observe. The training loop is simply repeated 1000 times:
```
for run in range(0, 1000):
print(f'Run {run}')
model.train()
...
```
Save the following code as `train_monitor.py` and you can run it as `python3 train_monitor.py` on a Trn1 instance.
```
import os
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.datasets import mnist
from torch.optim import SGD
from torch.utils.data import DataLoader
from torchvision.transforms import ToTensor

# XLA imports
import torch_xla.core.xla_model as xm

# Declare 3-layer MLP for MNIST dataset
class MLP(nn.Module):
    def __init__(self, input_size=28 * 28, output_size=10, layers=[120, 84]):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(input_size, layers[0])
        self.fc2 = nn.Linear(layers[0], layers[1])
        self.fc3 = nn.Linear(layers[1], output_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return F.log_softmax(x, dim=1)

# Load MNIST train dataset
train_dataset = mnist.MNIST(root='./MNIST_DATA_train',
                            train=True, download=True, transform=ToTensor())

def main():
    # Prepare data loader
    train_loader = DataLoader(train_dataset, batch_size=32)

    # Fix the random number generator seeds for reproducibility
    torch.manual_seed(0)

    # XLA: Specify XLA device (defaults to a NeuronCore on Trn1 instance)
    device = 'xla'

    # Move model to device and declare optimizer and loss function
    model = MLP().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.NLLLoss()

    # Run the training loop
    print('----------Training ---------------')
    for run in range(0, 1000):
        print(f'Run {run}')
        model.train()
        for idx, (train_x, train_label) in enumerate(train_loader):
            optimizer.zero_grad()
            train_x = train_x.view(train_x.size(0), -1)
            train_x = train_x.to(device)
            train_label = train_label.to(device)
            output = model(train_x)
            loss = loss_fn(output, train_label)
            loss.backward()
            optimizer.step()
            xm.mark_step()  # XLA: collect ops and run them in XLA runtime
            if idx < 2:  # skip warmup iterations
                start = time.time()

    # Save checkpoint for evaluation
    os.makedirs("checkpoints", exist_ok=True)
    checkpoint = {'state_dict': model.state_dict()}
    # XLA: use xm.save instead of torch.save to ensure states are moved back to cpu
    # This can prevent "XRT memory handle not found" at end of test.py execution
    xm.save(checkpoint, 'checkpoints/checkpoint.pt')

    print('----------End Training ---------------')

if __name__ == '__main__':
    main()
```
## [Setting up **Prometheus** and **Grafana**](#id3)[#](#setting-up-prometheus-and-grafana "Permalink to this headline")
Note
The setup presented in the following paragraphs can be extended to monitor any number of instances running training jobs or inference workloads. For this tutorial, we will set everything up on a single Trn1 instance running Amazon Linux 2.
### [Setting up **Prometheus**](#id4)[#](#setting-up-prometheus "Permalink to this headline")
For a more detailed guide on how to install **Prometheus** visit their official guide at [https://prometheus.io/docs/prometheus/latest/getting\_started/](https://prometheus.io/docs/prometheus/latest/getting_started/).
Download and unzip a prebuilt **Prometheus** binary on your Trn1 instance:
```
wget https://github.com/prometheus/prometheus/releases/download/v2.38.0/prometheus-2.38.0.linux-amd64.tar.gz
tar -xzvf prometheus-2.38.0.linux-amd64.tar.gz
cd prometheus-2.38.0.linux-amd64/
```
Create a config file (`prometheus.yml`) and add a scrape target:
```
scrape_configs:
- job_name: 'neuron'
# Scrape target every 5 seconds.
scrape_interval: 5s
static_configs:
- targets: ['localhost:8000']
```
Finally, start **Prometheus**:
```
./prometheus --config.file=prometheus.yml
```
### [Setting up **Grafana**](#id5)[#](#setting-up-grafana "Permalink to this headline")
For a more detailed guide on how to install **Grafana** visit their official guide at [https://grafana.com/grafana/download](https://grafana.com/grafana/download).
Add the Grafana repo to yum:
```
sudo vim /etc/yum.repos.d/grafana.repo
[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
```
Install and start **Grafana**:
```
sudo yum install -y grafana
sudo /bin/systemctl start grafana-server.service
```
By default, **Grafana** will run an HTTP server on port 3000. If you need to change that, update its config and restart the service:
```
sudo vim /etc/grafana/grafana.ini
...
sudo /bin/systemctl restart grafana-server.service
```
Using your favorite web browser, access the Grafana webpage and add a new dashboard.
The default user and password are both ‘admin’:

Next, you’ll add a Prometheus data source by going to `Configuration` -> `Data Sources`:

… and adding the local **Prometheus** server as a data source:

Finally, upload the sample dashboard `neuron-monitor-grafana.json` to **Grafana**:

## [Monitoring the Training Workload](#id6)[#](#monitoring-the-training-workload "Permalink to this headline")
Start the training job which, due to the artificially added complexity, will take more than 15 minutes (this assumes you saved the script above as `train_monitor.py`):
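```
python3 train_monitor.py
```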
On the same instance, start `neuron-monitor` and its companion script, `neuron-monitor-prometheus.py`:
```
neuron-monitor | neuron-monitor-prometheus.py
```
Once they are running, you can use your web browser to access the **Grafana** server running on your Trn1 instance and view a timeline of the system utilization. The companion script exposes the metrics endpoint that **Prometheus** scrapes (`localhost:8000` in the config above).
The upper part of the dashboard contains:
- a list of the currently monitored instances (for this tutorial there is a single Trn1 instance)
- aggregated metrics for stats such as NeuronCore utilization, NeuronCores in use, iteration success rates, error rates etc.
- a timeline of execution status rates and execution latencies

The lower part of the dashboard contains:
- one line of charts containing a timeline of Neuron resource utilization (NeuronCore, vCPU and memory utilization)
- one line of charts containing a timeline of host resource utilization (vCPU and memory utilization)

_This document is relevant for_: `Inf2`, `Trn1`, `Trn1n`
|
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) - Supported Operators
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/setup-trn1-multi-node-execution.html">
How to prepare trn1.32xlarge for multi-node execution
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/torch/torch-neuronx/training-troubleshooting.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) for Training Troubleshooting Guide
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/torch/torch-neuronx/index.html">
PyTorch Neuron (
<code class="docutils literal notranslate">
<span class="pre">
torch-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/index.html">
TensorFlow Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-20" name="toctree-checkbox-20" type="checkbox">
<label for="toctree-checkbox-20">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-setup.html">
Tensorflow Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx-inference.html">
Inference (Inf2 & Trn1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-21" name="toctree-checkbox-21" type="checkbox">
<label for="toctree-checkbox-21">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorials-tensorflow-neuronx.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-22" name="toctree-checkbox-22" type="checkbox">
<label for="toctree-checkbox-22">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../src/examples/tensorflow/tensorflow-neuronx/tfneuronx-roberta-base-tutorial.html">
HuggingFace Roberta-Base
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tutorials/tutorial-tensorflowx-serving-NeuronRT-Visible-Cores.html">
Using NEURON_RT_VISIBLE_CORES with TensorFlow Serving
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-23" name="toctree-checkbox-23" type="checkbox">
<label for="toctree-checkbox-23">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tfneuronx-python-tracing-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tf-neuronx-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/tfnx-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) analyze_model API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuronx/misc-tensorflow-neuronx.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-24" name="toctree-checkbox-24" type="checkbox">
<label for="toctree-checkbox-24">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/tensorflow/tensorflow-neuronx/tensorflow-neuronx.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuronx
</span>
</code>
) Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron-inference.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-25" name="toctree-checkbox-25" type="checkbox">
<label for="toctree-checkbox-25">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-26" name="toctree-checkbox-26" type="checkbox">
<label for="toctree-checkbox-26">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tutorials/tutorials-tensorflow-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/additional-examples.html">
Additional Examples
</a>
<input class="toctree-checkbox" id="toctree-checkbox-27" name="toctree-checkbox-27" type="checkbox">
<label for="toctree-checkbox-27">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/tree/master/tensorflow-neuron/inference">
AWS Neuron Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-28" name="toctree-checkbox-28" type="checkbox">
<label for="toctree-checkbox-28">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-tracing-python-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Tracing API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-tfn-analyze-model-api.html">
TensorFlow 2.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) analyze_model API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-compilation-python-api.html">
TensorFlow 1.x (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Compilation API
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/api-auto-replication-api.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
</code>
) Auto Multicore Replication (Experimental)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/misc-tensorflow-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-29" name="toctree-checkbox-29" type="checkbox">
<label for="toctree-checkbox-29">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/tensorflow/tensorflow-neuron/tensorflow-neuron-v2.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/tensorflow/tensorflow-neuron/tensorflow2-accelerated-ops.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF2.x)
</span>
</code>
) Accelerated (torch-neuron) Python APIs and Graph Ops
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-tensorflow.html">
TensorFlow Neuron (
<code class="docutils literal notranslate">
<span class="pre">
tensorflow-neuron
</span>
<span class="pre">
(TF1.x)
</span>
</code>
) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../frameworks/tensorflow/training.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../frameworks/mxnet-neuron/index.html">
Apache MXNet (Incubating)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-30" name="toctree-checkbox-30" type="checkbox">
<label for="toctree-checkbox-30">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../frameworks/mxnet-neuron/mxnet-neuron-setup.html">
MXNet Neuron Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../frameworks/mxnet-neuron/inference-mxnet-neuron.html">
Inference (Inf1)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-31" name="toctree-checkbox-31" type="checkbox">
<label for="toctree-checkbox-31">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-neuron.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-32" name="toctree-checkbox-32" type="checkbox">
<label for="toctree-checkbox-32">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-computervision.html">
Computer Vision Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-nlp.html">
Natural Language Processing (NLP) Tutorials
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/mxnet-neuron/tutorials/tutorials-mxnet-utilizing-neuron-capabilities.html">
Utilizing Neuron Capabilities Tutorials
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/mxnet-neuron/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-33" name="toctree-checkbox-33" type="checkbox">
<label for="toctree-checkbox-33">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/mxnet-neuron/api-compilation-python-api.html">
Neuron Apache MXNet (Incubating) Compilation Python API
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/mxnet-neuron/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-34" name="toctree-checkbox-34" type="checkbox">
<label for="toctree-checkbox-34">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/appnotes/mxnet-neuron/flex-eg.html">
Flexible Execution Group (FlexEG) in Neuron-MXNet
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../frameworks/mxnet-neuron/misc-mxnet-neuron.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-35" name="toctree-checkbox-35" type="checkbox">
<label for="toctree-checkbox-35">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../frameworks/mxnet-neuron/troubleshooting-guide.html">
Troubleshooting Guide for Neuron Apache MXNet (Incubating)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/mxnet-neuron/mxnet-neuron.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/neuron-cc-ops-mxnet.html">
Neuron Apache MXNet (Incubating) Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
ML Libraries
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../libraries/transformers-neuronx/index.html">
Transformers Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-36" name="toctree-checkbox-36" type="checkbox">
<label for="toctree-checkbox-36">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../libraries/transformers-neuronx/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/transformers-neuronx/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-37" name="toctree-checkbox-37" type="checkbox">
<label for="toctree-checkbox-37">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/transformers-neuronx/transformers-neuronx-developer-guide.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) Developer Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/transformers-neuronx/transformers-neuronx-tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-38" name="toctree-checkbox-38" type="checkbox">
<label for="toctree-checkbox-38">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/meta-llama-2-13b-sampling.ipynb">
Hugging Face meta-llama/Llama-2-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-13b-sampling.ipynb">
Hugging Face facebook/opt-13b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-30b-sampling.ipynb">
Hugging Face facebook/opt-30b autoregressive sampling on Inf2 & Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-samples/blob/master/torch-neuronx/transformers-neuronx/inference/facebook-opt-66b-sampling.ipynb">
Hugging Face facebook/opt-66b autoregressive sampling on Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/transformers-neuronx/transformers-neuronx-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-39" name="toctree-checkbox-39" type="checkbox">
<label for="toctree-checkbox-39">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/torch/transformers-neuronx/index.html">
Transformers Neuron (
<code class="docutils literal notranslate">
<span class="pre">
transformers-neuronx
</span>
</code>
) release notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../libraries/neuronx-distributed/index.html">
Neuron Distributed
</a>
<input class="toctree-checkbox" id="toctree-checkbox-40" name="toctree-checkbox-40" type="checkbox">
<label for="toctree-checkbox-40">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../libraries/neuronx-distributed/setup/index.html">
Setup
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/neuronx-distributed/app_notes.html">
App Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-41" name="toctree-checkbox-41" type="checkbox">
<label for="toctree-checkbox-41">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tensor_parallelism_overview.html">
Tensor Parallelism Overview
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/neuronx-distributed/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-42" name="toctree-checkbox-42" type="checkbox">
<label for="toctree-checkbox-42">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/api_guide.html">
API Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/neuronx-distributed/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-43" name="toctree-checkbox-43" type="checkbox">
<label for="toctree-checkbox-43">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tp_developer_guide.html">
Developer guide for Tensor Parallelism (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/index.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-44" name="toctree-checkbox-44" type="checkbox">
<label for="toctree-checkbox-44">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training.html">
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
_This document is relevant for_: `Inf2`, `Trn1`, `Trn1n`

## Track System Resource Utilization during Training with neuron-monitor using PyTorch Neuron[#](#track-system-resource-utilization-during-training-with-neuron-monitor-using-pytorch-neuron "Permalink to this headline")

Table of Contents

- [Multi-layer Perceptron MNIST Model](#multi-layer-perceptron-mnist-model)
- [The Training Job](#the-training-job)
- [Setting up **Prometheus** and **Grafana**](#setting-up-prometheus-and-grafana)
  - [Setting up **Prometheus**](#setting-up-prometheus)
  - [Setting up **Grafana**](#setting-up-grafana)
- [Monitoring the Training Workload](#monitoring-the-training-workload)

This tutorial explains how to monitor resource utilization using **neuron-monitor**, **Prometheus** and **Grafana** while running a multi-layer perceptron MNIST model on Trainium using PyTorch Neuron.
## [Multi-layer Perceptron MNIST Model](#id1)[#](#multi-layer-perceptron-mnist-model "Permalink to this headline")

This tutorial is based on the MNIST example for PyTorch Neuron on Trainium. For the full tutorial, please see [Multi-Layer Perceptron Training Tutorial](../../frameworks/torch/torch-neuronx/tutorials/training/mlp.html#neuronx-mlp-training-tutorial).

## [The Training Job](#id2)[#](#the-training-job "Permalink to this headline")

For this tutorial, we will make the original script do more work thus giving us more system utilization data to observe. The training loop is simply repeated 1000 times:

```
for run in range(0, 1000):
    print(f'Run {run}')
    model.train()
    ...
```

Save the following code as `train_monitor.py` and you can run it as `python3 train_monitor.py` on a Trn1 instance.
```
import os
import time
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.datasets import mnist
from torch.optim import SGD
from torch.utils.data import DataLoader
from torchvision.transforms import ToTensor

# XLA imports
import torch_xla.core.xla_model as xm

# Declare 3-layer MLP for MNIST dataset
class MLP(nn.Module):
    def __init__(self, input_size = 28 * 28, output_size = 10, layers = [120, 84]):
        super(MLP, self).__init__()
        self.fc1 = nn.Linear(input_size, layers[0])
        self.fc2 = nn.Linear(layers[0], layers[1])
        self.fc3 = nn.Linear(layers[1], output_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return F.log_softmax(x, dim=1)

# Load MNIST train dataset
train_dataset = mnist.MNIST(root='./MNIST_DATA_train', \
                            train=True, download=True, transform=ToTensor())

def main():
    # Prepare data loader
    train_loader = DataLoader(train_dataset, batch_size=32)

    # Fix the random number generator seeds for reproducibility
    torch.manual_seed(0)

    # XLA: Specify XLA device (defaults to a NeuronCore on Trn1 instance)
    device = 'xla'

    # Move model to device and declare optimizer and loss function
    model = MLP().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.NLLLoss()

    # Run the training loop
    print('----------Training ---------------')
    for run in range(0, 1000):
        print(f'Run {run}')
        model.train()
        for idx, (train_x, train_label) in enumerate(train_loader):
            optimizer.zero_grad()
            train_x = train_x.view(train_x.size(0), -1)
            train_x = train_x.to(device)
            train_label = train_label.to(device)
            output = model(train_x)
            loss = loss_fn(output, train_label)
            loss.backward()
            optimizer.step()
            xm.mark_step()  # XLA: collect ops and run them in XLA runtime
            if idx < 2:  # skip warmup iterations
                start = time.time()

    # Save checkpoint for evaluation
    os.makedirs("checkpoints", exist_ok=True)
    checkpoint = {'state_dict': model.state_dict()}
    # XLA: use xm.save instead of torch.save to ensure states are moved back to cpu
    # This can prevent "XRT memory handle not found" at end of test.py execution
    xm.save(checkpoint, 'checkpoints/checkpoint.pt')

    print('----------End Training ---------------')

if __name__ == '__main__':
    main()
```
## [Setting up **Prometheus** and **Grafana**](#id3)[#](#setting-up-prometheus-and-grafana "Permalink to this headline")

Note

The setup presented in the following paragraphs can be extended to monitor any number of instances running training jobs or inference workloads. For this tutorial, we will set everything up on a single Trn1 instance running Amazon Linux 2.

### [Setting up **Prometheus**](#id4)[#](#setting-up-prometheus "Permalink to this headline")

For a more detailed guide on how to install **Prometheus** visit their official guide at [https://prometheus.io/docs/prometheus/latest/getting_started/](https://prometheus.io/docs/prometheus/latest/getting_started/).

Download and unzip a prebuilt **Prometheus** binary on your Trn1 instance:

```
wget https://github.com/prometheus/prometheus/releases/download/v2.38.0/prometheus-2.38.0.linux-amd64.tar.gz
tar -xzvf prometheus-2.38.0.linux-amd64.tar.gz
cd prometheus-2.38.0.linux-amd64/
```

Create a config and add a scrape target:

```
vim prometheus.yml
```

```
scrape_configs:
  - job_name: 'neuron'
    # Scrape target every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:8000']
```

Finally, start **Prometheus**:

```
./prometheus --config.file=prometheus.yml
```
### [Setting up **Grafana**](#id5)[#](#setting-up-grafana "Permalink to this headline")

For a more detailed guide on how to install **Grafana** visit their official guide at [https://grafana.com/grafana/download](https://grafana.com/grafana/download).

Add the Grafana repo to yum:

```
sudo vim /etc/yum.repos.d/grafana.repo

[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
```

Install and start **Grafana**:

```
sudo yum install -y grafana
sudo /bin/systemctl start grafana-server.service
```

By default, **Grafana** will run an HTTP server on port 3000. If you need to change that, update its config and restart the service:

```
sudo vim /etc/grafana/grafana.ini
...
sudo /bin/systemctl restart grafana-server.service
```
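Before opening the browser, you can optionally confirm that Grafana is up and serving requests. This quick check is not part of the original tutorial; it assumes Grafana is still listening on its default port 3000 (noted above) and uses Grafana's `/api/health` endpoint:

```
# Optional check that the local Grafana server is responding (assumes default port 3000).
import json
import urllib.request

with urllib.request.urlopen("http://localhost:3000/api/health", timeout=5) as resp:
    health = json.load(resp)

# A healthy server returns a small JSON document including "database": "ok".
print(health)
```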
Using your favorite web browser, access the Grafana webpage and add a new dashboard.

The default user and password are both ‘admin’:



Next, you’ll add a Prometheus data source by going to `Configuration` -> `Data Sources`:



… and adding the local **Prometheus** server as a data source:



Finally, upload the sample dashboard `neuron-monitor-grafana.json` to **Grafana**:



## [Monitoring the Training Workload](#id6)[#](#monitoring-the-training-workload "Permalink to this headline")

Start the training job which, due to the artificially added complexity, will take more than 15 minutes:

```
python train_monitor.py
```

On the same instance, start `neuron-monitor` and its companion script, `neuron-monitor-prometheus.py`:

```
neuron-monitor | neuron-monitor-prometheus.py
```
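As an optional sanity check (not part of the original tutorial), you can confirm that metrics are being exposed on the port Prometheus scrapes; the sketch below assumes the default `localhost:8000` target configured in `prometheus.yml` above:

```
# Fetch the Prometheus-format metrics exposed by neuron-monitor-prometheus.py
# (assumes the localhost:8000 scrape target configured earlier).
import urllib.request

with urllib.request.urlopen("http://localhost:8000", timeout=5) as resp:
    metrics_text = resp.read().decode("utf-8")

# Print the first few lines of the exposition text; you should see Neuron-related metric names.
print("\n".join(metrics_text.splitlines()[:10]))
```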
Once they are running, you can use your web browser to access the **Grafana** server running on your Trn1 instance and view a timeline of the system utilization.

The upper part of the dashboard contains:

- a list of the currently monitored instances (for this tutorial there is a single Trn1 instance)
- aggregated metrics for stats such as NeuronCore utilization, NeuronCores in use, iteration success rates, error rates etc.
- a timeline of execution status rates and execution latencies



The lower part of the dashboard contains:

- one line of charts containing a timeline of Neuron resource utilization (NeuronCore, vCPU and memory utilization)
- one line of charts containing a timeline of host resource utilization (vCPU and memory utilization)



_This document is relevant for_: `Inf2`, `Trn1`, `Trn1n`
|
2023-09-29T20:55:01.155Z
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuronperf/neuronperf_examples.rst.txt
|
```
.. _neuronperf_examples:

NeuronPerf Examples
===================

This page walks through several examples of using NeuronPerf, starting with the simplest way---using a compiled model. We will also see how we can use NeuronPerf to perform a hyperparameter search, and manage the artifacts produced, as well as our results.

Benchmark a Compiled Model
--------------------------

This example assumes you have already compiled your model for Neuron and saved it to disk.
You will need to adapt the batch size, input shape, and filename for your model.

.. code:: python

   import torch # or tensorflow, mxnet
   import neuronperf as npf
   import neuronperf.torch # or tensorflow, mxnet

   # Construct dummy inputs
   batch_sizes = 1
   input_shape = (batch_sizes, 3, 224, 224)
   inputs = torch.ones(input_shape) # or numpy array for TF, MX

   # Benchmark and save results
   reports = npf.torch.benchmark("your_model_file.pt", inputs, batch_sizes)
   npf.print_reports(reports)
   npf.write_json(reports)

.. code:: bash

   INFO:neuronperf.benchmarking - Benchmarking 'your_model_file.pt', ~8.0 minutes remaining.
   throughput_avg  latency_ms_p50  latency_ms_p99  n_models  pipeline_size  workers_per_model  batch_size  model_filename
   296766.5        0.003           0.003           1         1              1                  1           your_model_file.pt
   3616109.75      0.005           0.008           24        1              1                  1           your_model_file.pt
   56801.0         0.035           0.04            1         1              2                  1           your_model_file.pt
   3094419.4       0.005           0.051           24        1              2                  1           your_model_file.pt

Let's suppose you only wish to test two specific configurations. You wish to benchmark 1 model and 1 worker thread, and also with 2 worker threads for 15 seconds each. The call to ``benchmark`` becomes:

.. code:: python

   reports = npf.torch.benchmark(filename, inputs, batch_sizes, n_models=1, workers_per_model=[1, 2], duration=15)

You can also add a custom model name to reports.

.. code:: python

   reports = npf.torch.benchmark(..., model_name="MyFancyModel")

See the :ref:`neuronperf_benchmark_guide` for further details.

Benchmark a Model from Source
-----------------------------

In this example, we define, compile, and benchmark a simple (dummy) model using PyTorch.
We'll assume you already have a PyTorch model compiled for Neuron with the filename ``model_neuron_b1.pt``. Furthermore, let's assume the model was traced with a batch size of 1, and has an input shape of (3, 224, 224).

.. literalinclude:: test_simple_pt.py
   :language: python
   :caption: :download:`test_simple_pt.py <test_simple_pt.py>`
   :linenos:

.. code:: bash

   (aws_neuron_pytorch_p36) ubuntu@ip-172-31-11-122:~/tmp$ python test_simple_pt.py
   INFO:neuronperf.benchmarking - Benchmarking 'model_neuron_b1.pt', ~8.0 minutes remaining.
   throughput_avg  latency_ms_p50  latency_ms_p99  n_models  pipeline_size  workers_per_model  batch_size  model_filename
   296766.5        0.003           0.003           1         1              1                  1           model_neuron_b1.pt
   3616109.75      0.005           0.008           24        1              1                  1           model_neuron_b1.pt
   56801.0         0.035           0.04            1         1              2                  1           model_neuron_b1.pt
   3094419.4       0.005           0.051           24        1              2                  1           model_neuron_b1.pt

Great! Here is what a default ``csv`` file looks like.

.. df-table::
   :header-rows: 1

   df = pd.read_csv('model_neuron_b1.csv')

Compile and Benchmark a Model
-----------------------------

Here is an end-to-end example of compiling and benchmarking a ResNet-50 model from ``torchvision``.

.. literalinclude:: test_resnet50_pt.py
   :language: python
   :caption: :download:`test_resnet50_pt.py <test_resnet50_pt.py>`
   :linenos:

Benchmark on CPU or GPU
-----------------------

When benchmarking on CPU or GPU, the API is slightly different. With CPU or GPU, there is no compiled model to benchmark, so instead we need to directly pass a reference to the model class that will be instantiated.

.. note::

   GPU benchmarking is currently only available for PyTorch.

CPU:

.. code:: python

   cpu_reports = npf.cpu.benchmark(YourModelClass, ...)

GPU:

.. code:: python

   gpu_reports = npf.torch.benchmark(YourModelClass, ..., device_type="gpu")

Please refer to :ref:`npf-cpu-gpu` for details and an example of providing your model class.
```
|
|
2023-09-29T20:55:01.248Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuronperf/neuronperf_overview.rst.txt
|
```
.. _neuronperf_overview:

===================
NeuronPerf Overview
===================

NeuronPerf is a lightweight Python library that can help you easily benchmark your models with Neuron hardware.

NeuronPerf supports Neuron releases for PyTorch, Tensorflow, and MXNet. It is used internally by the Neuron team to generate performance benchmarking numbers.

When interacting with NeuronPerf, you will typically import the base package along with one of the submodule wrappers, for example:

.. code:: python

   import neuronperf
   import neuronperf.torch

You may then benchmark and/or compile one or more models with NeuronPerf. For example,

.. code:: python

   reports = neuronperf.torch.benchmark(model, inputs, ...)

The ``compile`` and ``benchmark`` methods must be accessed through one of the supported framework submodules.

Benchmarking
============

All NeuronPerf ``benchmark`` calls require a minimum of two arguments:

1. A filename
2. Inputs

The filename may refer to:

1. A Neuron-compiled model (e.g. ``my_model.pt``)
2. A :ref:`Model Index <neuronperf_model_index_guide>`.

A Model Index is useful for benchmarking more than one model in a single session.

Compiling
=========

NeuronPerf also provides a standard interface to all Neuron frameworks through the ``compile`` API.

.. code:: python

   model_index = neuronperf.torch.compile(model, inputs, ...)

This is completely optional. You may use the standard compilation guides for supported frameworks.

Next Steps
==========

Take a look at the simple :ref:`neuronperf_examples`, :ref:`neuronperf_benchmark_guide`, :ref:`neuronperf_compile_guide`, and :ref:`neuronperf_api`.
```
|
|
2023-09-29T20:55:01.516Z
|
|
Profiling PyTorch Neuron (torch-neuronx) with TensorBoard — AWS Neuron Documentation
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/tools/tutorials/torch-neuronx-profiling-with-tb.html#torch-neuronx-profiling-with-tb
|
# Profiling PyTorch Neuron (torch-neuronx) with TensorBoard — AWS Neuron Documentation
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
## Profiling PyTorch Neuron (`torch-neuronx`) with TensorBoard[#](#profiling-pytorch-neuron-torch-neuronx-with-tensorboard "Permalink to this headline")
Table of Contents
- [Introduction](#introduction)
- [Setup](#setup)
- [Prerequisites](#prerequisites)
- [Environment](#environment)
- [Part 1: Operator Level Trace for `xm.mark_step()` workflow](#part-1-operator-level-trace-for-xm-markstep-workflow)
- [Goal](#goal)
- [Set Up](#set-up)
- [Understanding the Code](#understanding-the-code)
- [Running The Profiler](#running-the-profiler)
- [Loading the Operators Level Trace in TensorBoard](#loading-the-operators-level-trace-in-tensorboard)
- [Operator Framework View](#operator-framework-view)
- [Operator HLO View](#operator-hlo-view)
- [Operator Trace View](#operator-trace-view)
- [Understanding the Low Level Timeline](#understanding-the-low-level-timeline)
- [Part 2: Operator Level Trace with `torch_neuronx.trace()` workflow](#part-2-operator-level-trace-with-torch-neuronx-trace-workflow)
- [Set Up](#id2)
- [Important code differences from Part 1](#important-code-differences-from-part-1)
- [Running Part 2](#running-part-2)
- [Loading the Operators Level Trace in TensorBoard](#id3)
- [Notable Differences in Timeline View from Part 1:](#notable-differences-in-timeline-view-from-part-1)
## [Introduction](#id4)[#](#introduction "Permalink to this headline")
Neuron provides a plugin for TensorBoard that allows users to measure and visualize performance at the torch runtime level or at the operator level. With this information, it becomes easier to identify performance bottlenecks and address them quickly.
For more information on the Neuron plugin for TensorBoard, see [Neuron Plugin for TensorBoard (Trn1)](../tensorboard/getting-started-tensorboard-neuronx-plugin.html#neuronx-plugin-tensorboard).
## [Setup](#id5)[#](#setup "Permalink to this headline")
### [Environment](#id7)[#](#environment "Permalink to this headline")
```
#activate python virtual environment and install tensorboard_plugin_neuronx
source ~/aws_neuron_venv_pytorch_p38/bin/activate
pip install tensorboard_plugin_neuronx
#create work directory for the Neuron Profiling tutorials
mkdir -p ~/neuron_profiling_tensorboard_examples
cd ~/neuron_profiling_tensorboard_examples
```
## [Part 1: Operator Level Trace for `xm.mark_step()` workflow](#id8)[#](#part-1-operator-level-trace-for-xm-markstep-workflow "Permalink to this headline")
### [Goal](#id9)[#](#goal "Permalink to this headline")
After completing this tutorial, the user should be able to understand the features of the Operator Level Trace. The user should also be able to form a narrative/surface level analysis from what is being presented in the Operator Level Trace.
### [Set Up](#id10)[#](#set-up "Permalink to this headline")
Let’s set up a directory containing the material for this demo:
```
cd ~/neuron_profiling_tensorboard_examples
mkdir tutorial_1
cd tutorial_1
# this is where our code will be written
touch run.py
```
Here is the code for `run.py`:
```
import os
import torch
import torch_neuronx
from torch_neuronx.experimental import profiler

import torch_xla.core.xla_model as xm

os.environ["NEURON_CC_FLAGS"] = "--cache_dir=./compiler_cache"

device = xm.xla_device()

class NN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(4,4)
        self.nl1 = torch.nn.ReLU()
        self.layer2 = torch.nn.Linear(4,2)
        self.nl2 = torch.nn.Tanh()

    def forward(self, x):
        x = self.nl1(self.layer1(x))
        return self.nl2(self.layer2(x))

with torch.no_grad():
    model = NN()
    inp = torch.rand(4,4)
    output = model(inp)

    with torch_neuronx.experimental.profiler.profile(
            port=9012,
            profile_type='operator',
            ms_duration=10000):
        # IMPORTANT: the model has to be transferred to XLA within
        # the context manager, otherwise profiling won't work
        neuron_model = model.to(device)
        neuron_inp = inp.to(device)

        output_neuron = neuron_model(neuron_inp)
        xm.mark_step()

print("==CPU OUTPUT==")
print(output)
print()
print("==TRN1 OUTPUT==")
print(output_neuron)
```
### [Understanding the Code](#id11)[#](#understanding-the-code "Permalink to this headline")
For this first tutorial, we’ll be using a simple feed-forward NN model. Even so, once the TensorBoard dashboard is up, we’ll see some interesting and unexpected things, and a simple model is helpful because it is easy to reference back to.
Another important detail is the “operator” profiling type we specified in the context manager.
**Low Level:** The “operator” dashboard is the one that contains the Operator Level Trace. This view zooms in on the NeuronDevice only, while the “trace” dashboard shows processes from all devices. The Operator Level Trace View is organized by levels of abstraction, with the top level showing the model class. The next lower tier shows model components, and the lowest tier shows the specific operators that occur for a given model component. This view is useful for identifying model bottlenecks at the operator level.
We also print out the outputs from the CPU model and the TRN1 model to note the small differences in output.
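For comparison, the runtime-level “trace” dashboard mentioned above can be captured with the same context manager. Below is a minimal sketch, assuming that passing `profile_type='trace'` selects that runtime-level view; it reuses only the options already shown in `run.py` and is not an exhaustive reference for the profiler API.
```
# Minimal sketch (assumption: profile_type='trace' selects the runtime-level
# "trace" dashboard; only options already used in run.py appear here).
import torch
import torch_neuronx
from torch_neuronx.experimental import profiler
import torch_xla.core.xla_model as xm


def profile_runtime_level(model: torch.nn.Module, inp: torch.Tensor) -> torch.Tensor:
    device = xm.xla_device()
    with torch.no_grad():
        with torch_neuronx.experimental.profiler.profile(
                port=9012,
                profile_type='trace',   # runtime-level view instead of 'operator'
                ms_duration=10000):
            # The model still has to be moved to the XLA device inside the context
            neuron_model = model.to(device)
            output = neuron_model(inp.to(device))
            xm.mark_step()
    return output
```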
### [Running The Profiler](#id12)[#](#running-the-profiler "Permalink to this headline")
Run the script with `python run.py`.
**Output:**
Initial Output & Compilation Success
```
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Analyzing dependencies of Block1
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Analyzing dependencies of Block1
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Dependency reduction of sg0000
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
```
Processing the Neuron Profiler Traces
```
torch_neuron: Waiting for XLA profile completion ...
torch_neuron: translate_xplane: Processing plane: '/host:CPU'
torch_neuron: XLA decode - Read filename 2023_04_28_00_54_04
torch_neuron: XLA decode - Read date parts ['2023', '04', '28', '00', '54', '04']
torch_neuron: XLA decode - Read start date 2023-04-28 00:54:04 from directory stamp
torch_neuron: translate_xplane: Processing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline_split.json'
torch_neuron: translate_xplane: Writing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline_split.json' to 'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_op_timeline_split.json'
torch_neuron: translate_xplane: Processing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline.json'
torch_neuron: translate_xplane: Writing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline.json' to 'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_op_timeline.json'
torch_neuron: translate_xplane: Processing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_hlo_op.json'
torch_neuron: translate_xplane: Writing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_hlo_op.json' to 'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_hlo_op.json'
torch_neuron: translate_xplane: Processing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_framework_op.json'
torch_neuron: translate_xplane: Writing plane: '/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_framework_op.json' to 'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_framework_op.json'
```
Printing output from CPU model and Trn1 Model:
```
==CPU OUTPUT==
tensor([[-0.1396, -0.3266],
[-0.0327, -0.3105],
[-0.0073, -0.3268],
[-0.1683, -0.3230]])
==TRN1 OUTPUT==
tensor([[-0.1396, -0.3266],
[-0.0328, -0.3106],
[-0.0067, -0.3270],
[-0.1684, -0.3229]], device='xla:1')
```
### [Loading the Operators Level Trace in TensorBoard](#id13)[#](#loading-the-operators-level-trace-in-tensorboard "Permalink to this headline")
Run `tensorboard --load_fast=false --logdir logs/`
Take note of the port (usually 6006) and enter `localhost:<port>` into your local browser (assuming port forwarding is set up properly).
The Operator Level Trace runs follow the same naming format as before, plus an id at the end: `year_month_day_hour_minute_second_millisecond_id`. The Tool dropdown will have 3 options: operator-framework, operator-hlo, and operator-timeline.
### [Operator Framework View](#id14)[#](#operator-framework-view "Permalink to this headline")

This view contains a pie chart displaying the proportional execution time of each model operator at the framework level for a Neuron device. The list of operators is shown at the bottom along with other details, such as the number of occurrences, execution time, and the Neuron device and core.
### [Operator HLO View](#id15)[#](#operator-hlo-view "Permalink to this headline")

This view contains a pie chart displaying the proportional execution time of each model operator at the HLO level for a Neuron device. The list of operators is shown at the bottom along with other details, such as the number of occurrences, execution time, and the Neuron device and core.
Note
For this simple model, the pie chart will be the same as the framework view. This won’t be the case for larger and more complex models.
### [Operator Trace View](#id16)[#](#operator-trace-view "Permalink to this headline")

#### Trace View Sections[#](#trace-view-sections "Permalink to this headline")
Notice there are four sections: Process Overview, Control, Execution, and Data Transfer. Each section has further subdivisions, with each layer representing a certain level of abstraction. It is also important to note that the timescale axis is aligned between the sections; sometimes there are gaps in the process execution, and most of the time data transfer operations are happening within those gaps.
#### Fusion Operators[#](#fusion-operators "Permalink to this headline")
**Simple Case:** Zooming in on the operations, we can recognize some typical neural network operations, such as a dot product and a transpose, but sometimes there will be fused operators (fusion operators). To understand one of these operators, click on it, and some information will appear at the bottom of the dashboard.

Notice in the above example that the fusion operator fuses the operators immediately before and after itself on the timeline. More specifically, `fused_3` is a fusion of `NN[model]/input` and `NN[model]/ReLU[nl1]/Tensor_1/aten__relu_maximum`. These kinds of fusions occur when the `neuronx-cc` compiler has found an optimization relating to the two operators, most often executing the operators on separate compute engines or applying another form of parallelism.
**Complex Case:** Often, the ordering of fusion operators can get a little complicated or contain “hidden” information. For the first example, let’s zoom into the data transfer section so that the timescale ranges from 6000 ns to 6600 ns. It should look similar to the image below:

Looking at `fused_16` (11452 ns), we see it is surrounded by other fused operators. Furthermore, the `fused_16` operator fuses more than two operators: `NN[model]/Linear[layer1]/aten__addmm_add`, `NN[model]/input`, and `NN[model]/Linear[layer1]/aten__addmm_dot`. These operators can usually be found in the timeline, but sometimes a fused operator may not appear there because it occurs within another operation. We go over an example of this case in Part 2.
### [Understanding the Low Level Timeline](#id17)[#](#understanding-the-low-level-timeline "Permalink to this headline")
The trace lets us look behind the scenes at how the model is executed on Neuron hardware. Before proceeding with the analysis, it is worth recalling how we defined the model for this tutorial:
```
class NN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(4, 4)
        self.nl1 = torch.nn.ReLU()
        self.layer2 = torch.nn.Linear(4, 2)
        self.nl2 = torch.nn.Tanh()

    def forward(self, x):
        x = self.nl1(self.layer1(x))
        return self.nl2(self.layer2(x))
```
#### Analysis[#](#analysis "Permalink to this headline")
**Input Operators:** We see input operators here because, in a mark_step flow, the inputs need to be transferred to the XLA device. This is represented by the `SyncTensorsGraph.53` call.
**ReLU at the beginning:** The first couple of blocks in the Process Data Transfer section initially appear confusing: there is an `Input` (0 ns) block followed by a `ReLU` (100 ns) operator. Under the hood, `ReLU` is rewritten as `elementwise_max(arr, 0)` (where 0 denotes an array of zeros), but to create this operation the zeros have to be set in memory, which is a data operation. A general rule is that if an operator appears this early in the data transfer section, it most likely means there is an operator lowering that involves setting some values into memory for later use.
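To make the `elementwise_max` rewrite concrete, here is a small CPU-side sketch in plain PyTorch (an illustration of the math only, not the Neuron runtime’s actual code path) showing that ReLU is equivalent to an element-wise maximum against a tensor of zeros, which is why those zeros have to be placed in memory first.
```
import torch

x = torch.randn(4, 4)
zeros = torch.zeros_like(x)             # the array of zeros that must first exist in memory
relu_as_max = torch.maximum(x, zeros)   # elementwise_max(arr, 0)

# Identical to the framework-level ReLU
assert torch.equal(relu_as_max, torch.nn.functional.relu(x))
```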
**Memory allocation for Linear\[layer1\]:** We resume with the data transfer operations. Here, memory is allocated for specific operators, and sometimes the already-allocated portions of the input are loaded into operators while the rest of the input is still being allocated. This can be seen at `fused_18` (11811 ns) and `fused_23` (12181 ns). Eventually the input gets fully allocated, and further allocations occur for the dot product, transpose, and broadcast operators of `Linear[layer1]` and `Linear[layer2]`.
#### Conclusion[#](#conclusion "Permalink to this headline")
A few conclusions can be drawn from analyzing the timeline. We can see that some time has been saved through parallelism with fusion operations, and some compute time has been saved with preloading operations (e.g. `ReLU`). A clear trend is that the majority of the time is spent on data transfer operations. It is also evident that even a simple feed-forward NN becomes complicated when put under a microscope in the profiler. Facts such as the implementation of `ReLU` in the runtime/architecture aren’t explicitly stated in the profiler, but they make themselves known through the unusual placement of trace blocks and the unusual fusion operators.
In terms of action items that can be taken based on our narrative, there really aren’t any. This is a very simple model that produces its output after 8 microseconds, and we chose it because it is simple to understand. In more realistic examples we will aim to do more compute than data transfer on the hardware and, where possible, to overlap data transfer and compute between sequential operations.
The profiler revealed many optimizations that were made, via fusion operators and parallelism. However, the end goal of this tool is to improve performance by revealing the model’s bottlenecks.
Note
While we did explain some of the quirks visible in the profiler at a microscopic level, it isn’t necessary to do so for normal use. This tutorial introduced the microscopic explanation for these occurrences to show the user that this is _indeed_ what happens on the hardware when executing a simple FFNN.
## [Part 2: Operator Level Trace with `torch_neuronx.trace()` workflow](#id18)[#](#part-2-operator-level-trace-with-torch-neuronx-trace-workflow "Permalink to this headline")
### [Set Up](#id19)[#](#id2 "Permalink to this headline")
The setup will be similar to Part 1.
```
cd ~/neuron_profiling_tensorboard_examples
mkdir tutorial_2
cd tutorial_2
# this is where our code will be written
touch run.py
```
Here is the code for `run.py`:
```
import os
import time

import torch
import torch_neuronx
from torch_neuronx.experimental import profiler


class NN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(4, 4)
        self.nl1 = torch.nn.ReLU()
        self.layer2 = torch.nn.Linear(4, 2)
        self.nl2 = torch.nn.Tanh()

    def forward(self, x):
        x = self.nl1(self.layer1(x))
        return self.nl2(self.layer2(x))


model = NN()
model.eval()

inp = torch.rand(4, 4)

output = model(inp)

with torch_neuronx.experimental.profiler.profile(
        port=9012,
        profile_type='operator',
        ms_duration=10000,
        traced_only=True):
    neuron_model = torch_neuronx.trace(model, inp, compiler_workdir="./compiler_cache")
    # assign the result so it can be printed below
    output_neuron = neuron_model(inp)

print("==CPU OUTPUT==")
print(output)
print()
print("==INF2 OUTPUT==")
print(output_neuron)
```
### [Important code differences from Part 1](#id20)[#](#important-code-differences-from-part-1 "Permalink to this headline")
1. `import torch_xla.core.xla_model as xm` is no longer necessary
2. Set `traced_only=True` in `torch_neuronx.experimental.profiler.profile()`. This option is necessary for traced models; otherwise the generated profile will be inaccurate or will not work at all.
3. Tracing the model with `torch_neuronx.trace()` and removing `xm.mark_step()`.
Otherwise, the code is the same as Part 1.
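For quick reference, here is a condensed side-by-side sketch of the two workflows. It only restates code already shown in Parts 1 and 2, with the profiler context managers omitted for brevity.
```
# Condensed comparison of the two workflows profiled in this tutorial
# (profiler context managers omitted; see the full scripts in Parts 1 and 2).
import torch
import torch_neuronx
import torch_xla.core.xla_model as xm


def run_markstep_workflow(model: torch.nn.Module, inp: torch.Tensor) -> torch.Tensor:
    device = xm.xla_device()
    neuron_model = model.to(device)        # lazy XLA tensors on the Neuron device
    out = neuron_model(inp.to(device))
    xm.mark_step()                         # flush and execute the accumulated graph
    return out


def run_traced_workflow(model: torch.nn.Module, inp: torch.Tensor) -> torch.Tensor:
    neuron_model = torch_neuronx.trace(model, inp)   # ahead-of-time compiled module
    return neuron_model(inp)               # behaves like a regular torch.nn.Module
```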
### [Running Part 2](#id21)[#](#running-part-2 "Permalink to this headline")
To run: `python run.py`
The output will look almost identical to that of Part 1.
### [Loading the Operators Level Trace in TensorBoard](#id22)[#](#id3 "Permalink to this headline")
Run `tensorboard --load_fast=false --logdir logs/`, just like Part 1.
Timeline View:

### [Notable Differences in Timeline View from Part 1:](#id23)[#](#notable-differences-in-timeline-view-from-part-1 "Permalink to this headline")
**No Input Operators:** For a traced model, we do not transfer the input to an XLA device, so these operations are not seen on the timeline. This also affects scheduling, which is why the profiled time is less than in the mark_step workflow.
**Combined Loading of Linear\[layer1\] and Tanh:** `fused_19` (5824 ns) contains a fusion between `Linear[layer1]` and `Tanh[nl2]`. This might seem a bit odd, but such data loading parallelism makes sense once you consider how tanh is implemented. Typically, functions like tanh are implemented with lookup tables that must be preloaded into memory, which is a data transfer operation. The bulk of the data transfer operations are done at the beginning to optimize computations.
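To illustrate why a lookup-table implementation of tanh implies an up-front data transfer, here is a toy sketch. It is purely illustrative and is not how the Neuron hardware actually implements tanh: the table is precomputed once (the preloading step), and the run-time work reduces to an index lookup.
```
import torch

# "Preload" step: precompute tanh at fixed sample points; this table is what
# would have to live in memory before the operator can run.
xs = torch.linspace(-4.0, 4.0, steps=1025)
table = torch.tanh(xs)


def tanh_from_table(x: torch.Tensor) -> torch.Tensor:
    # Run-time step: nearest-neighbor lookup into the precomputed table
    # instead of evaluating tanh directly.
    idx = ((x.clamp(-4.0, 4.0) + 4.0) / 8.0 * (len(xs) - 1)).round().long()
    return table[idx]


x = torch.randn(4, 2)
approx = tanh_from_table(x)
assert torch.allclose(approx, torch.tanh(x), atol=1e-2)
```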
Note
Despite these differences, the big-picture conclusion drawn from Part 1 still holds, as the two timelines are more similar than different. One new insight is that the traced model performs better than the mark_step flow here, since we were profiling a single forward pass.
_This document is relevant for_: `Inf1`, `Inf2`, `Trn1`, `Trn1n`
Training using Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training-gpt-neox.html">
Training GPT-NeoX 6.9B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/training-gpt-neox-20b.html">
Training GPT-NeoX 20B using TP and ZeRO-1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../src/examples/pytorch/neuronx_distributed/t5-inference/t5-inference-tutorial.html">
T5 inference with Tensor Parallelism
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../libraries/neuronx-distributed/tutorials/inference.html">
Inference using Tensor Parallelism
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../libraries/neuronx-distributed/neuronx-distributed-misc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-45" name="toctree-checkbox-45" type="checkbox">
<label for="toctree-checkbox-45">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/neuronx-distributed/neuronx-distributed.html">
Neuron Distributed Release Notes (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-distributed
</span>
</code>
)
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../libraries/nemo-megatron/index.html">
AWS Neuron Reference for NeMo Megatron
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
User Guide
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../neuron-runtime/index.html">
Neuron Runtime
</a>
<input class="toctree-checkbox" id="toctree-checkbox-46" name="toctree-checkbox-46" type="checkbox">
<label for="toctree-checkbox-46">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-runtime/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-47" name="toctree-checkbox-47" type="checkbox">
<label for="toctree-checkbox-47">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-runtime/nrt-api-guide.html">
Runtime API
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-runtime/configuration-guide.html">
Configuration Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-48" name="toctree-checkbox-48" type="checkbox">
<label for="toctree-checkbox-48">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-runtime/nrt-configurable-parameters.html">
Runtime Configuration
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-runtime/misc-runtime.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-49" name="toctree-checkbox-49" type="checkbox">
<label for="toctree-checkbox-49">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-runtime/nrt-troubleshoot.html">
Troubleshooting on Inf1 and Trn1
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-runtime/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/runtime/aws-neuronx-runtime-lib/index.html">
Neuron Runtime Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/runtime/aws-neuronx-dkms/index.html">
Neuron Driver Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/runtime/aws-neuronx-collectives/index.html">
Neuron Collectives Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../compiler/index.html">
Neuron Compiler
</a>
<input class="toctree-checkbox" id="toctree-checkbox-50" name="toctree-checkbox-50" type="checkbox">
<label for="toctree-checkbox-50">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../compiler/neuronx-cc.html">
Neuron Compiler for Trn1 & Inf2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-51" name="toctree-checkbox-51" type="checkbox">
<label for="toctree-checkbox-51">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuronx-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-52" name="toctree-checkbox-52" type="checkbox">
<label for="toctree-checkbox-52">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.html">
Neuron Compiler CLI Reference Guide
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuronx-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-53" name="toctree-checkbox-53" type="checkbox">
<label for="toctree-checkbox-53">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/appnotes/neuronx-cc/neuronx-cc-training-mixed-precision.html">
Mixed Precision and Performance-accuracy Tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuronx-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuronx-cc/misc-neuronx-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-54" name="toctree-checkbox-54" type="checkbox">
<label for="toctree-checkbox-54">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuronx-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuronx-cc/index.html">
What's New
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../compiler/neuron-cc.html">
Neuron Compiler for Inf1
</a>
<input class="toctree-checkbox" id="toctree-checkbox-55" name="toctree-checkbox-55" type="checkbox">
<label for="toctree-checkbox-55">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuron-cc/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-56" name="toctree-checkbox-56" type="checkbox">
<label for="toctree-checkbox-56">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/command-line-reference.html">
Neuron compiler CLI Reference Guide (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuron-cc/developer-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-57" name="toctree-checkbox-57" type="checkbox">
<label for="toctree-checkbox-57">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/appnotes/neuron-cc/mixed-precision.html">
Mixed precision and performance-accuracy tuning (
<code class="docutils literal notranslate">
<span class="pre">
neuron-cc
</span>
</code>
)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../compiler/neuron-cc/misc-neuron-cc.html">
Misc
</a>
<input class="toctree-checkbox" id="toctree-checkbox-58" name="toctree-checkbox-58" type="checkbox">
<label for="toctree-checkbox-58">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../compiler/neuron-cc/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc.html">
What's New
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/compiler/neuron-cc/neuron-cc-ops/index.html">
Neuron Supported operators
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../neuron-customops/index.html">
Neuron C++ Custom Operators
</a>
<input class="toctree-checkbox" id="toctree-checkbox-59" name="toctree-checkbox-59" type="checkbox">
<label for="toctree-checkbox-59">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-customops/api-reference-guide/api-reference-guide.html">
API Reference Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-60" name="toctree-checkbox-60" type="checkbox">
<label for="toctree-checkbox-60">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-customops/api-reference-guide/custom-ops-ref-guide.html">
Custom Operators API Reference Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-customops/programming-guide/programming-guide.html">
Developer Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-61" name="toctree-checkbox-61" type="checkbox">
<label for="toctree-checkbox-61">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-customops/programming-guide/custom-c%2B%2B-operators-devguide.html">
Neuron Custom C++ Operators Developer Guide [Experimental]
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-customops/tutorials/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-62" name="toctree-checkbox-62" type="checkbox">
<label for="toctree-checkbox-62">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-customops/tutorials/customop-mlp-training.html">
Neuron Custom C++ Operators in MLP Training
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../neuron-customops/tutorials/customop-mlp-perf-opt.html">
Neuron Custom C++ Operators Performance Optimization
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../neuron-customops/misc-customops.html">
Misc (Neuron Custom C++ Operators)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-63" name="toctree-checkbox-63" type="checkbox">
<label for="toctree-checkbox-63">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/customcxxps/gpsimd-tools.html">
Neuron Custom C++ Tools Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/customcxxps/gpsimd-customop-lib.html">
Neuron Custom C++ Library Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../index.html">
Neuron Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-64" name="toctree-checkbox-64" type="checkbox">
<label for="toctree-checkbox-64">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../neuron-sys-tools/index.html">
System Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-65" name="toctree-checkbox-65" type="checkbox">
<label for="toctree-checkbox-65">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-sys-tools/neuron-monitor-user-guide.html">
Neuron-Monitor User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-sys-tools/neuron-top-user-guide.html">
Neuron-Top User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-sys-tools/neuron-ls.html">
Neuron-LS User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-sys-tools/neuron-profile-user-guide.html">
Neuron Profile User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-sys-tools/neuron-sysfs-user-guide.html">
Neuron-Sysfs User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuron-sys-tools/nccom-test.html">
NCCOM-TEST User Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/tools/aws-neuronx-tools.html">
What's New
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../tensorboard/index.html">
TensorBoard
</a>
<input class="toctree-checkbox" id="toctree-checkbox-66" name="toctree-checkbox-66" type="checkbox">
<label for="toctree-checkbox-66">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="tutorial-tensorboard-scalars-mnist.html">
Track Training Progress in TensorBoard using PyTorch Neuron
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../tensorboard/getting-started-tensorboard-neuronx-plugin.html">
TensorBoard Plugin for Neuron (Trn1)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/tools/tensorboard-neuron.html">
What's New
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../tensorboard/getting-started-tensorboard-neuron-plugin.html">
TensorBoard Plugin for Neuron (Inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../helper-tools/index.html">
Helper Tools
</a>
<input class="toctree-checkbox" id="toctree-checkbox-67" name="toctree-checkbox-67" type="checkbox">
<label for="toctree-checkbox-67">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../helper-tools/tutorial-neuron-check-model.html">
Check Model
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../helper-tools/tutorial-neuron-gatherinfo.html">
GatherInfo
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../neuronperf/index.html">
NeuronPerf (Beta)
</a>
<input class="toctree-checkbox" id="toctree-checkbox-68" name="toctree-checkbox-68" type="checkbox">
<label for="toctree-checkbox-68">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_overview.html">
Overview
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_terminology.html">
Terminology
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_examples.html">
Examples
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_benchmark_guide.html">
Benchmark Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_evaluate_guide.html">
Evaluate Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_compile_guide.html">
Compile Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_model_index_guide.html">
Model Index Guide
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_api.html">
API
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_framework_notes.html">
Framework Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../neuronperf/neuronperf_troubleshooting.html">
Troubleshooting
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../neuronperf/rn.html">
What’s New
</a>
<input class="toctree-checkbox" id="toctree-checkbox-69" name="toctree-checkbox-69" type="checkbox">
<label for="toctree-checkbox-69">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/tools/neuronperf.html">
NeuronPerf 1.x Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../general/calculator/neuron-calculator.html">
Neuron Calculator
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../general/setup/index.html">
Setup Guide
</a>
<input class="toctree-checkbox" id="toctree-checkbox-70" name="toctree-checkbox-70" type="checkbox">
<label for="toctree-checkbox-70">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../general/setup/torch-neuronx.html">
PyTorch Neuron (torch-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/setup/torch-neuron.html">
PyTorch Neuron (torch-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/setup/tensorflow-neuronx.html">
Tensorflow Neuron (tensorflow-neuronx)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/setup/tensorflow-neuron.html">
Tensorflow Neuron (tensorflow-neuron)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/setup/mxnet-neuron.html">
MxNet Neuron (mxnet-neuron)
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../containers/index.html">
Containers Deployment
</a>
<input class="toctree-checkbox" id="toctree-checkbox-71" name="toctree-checkbox-71" type="checkbox">
<label for="toctree-checkbox-71">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-72" name="toctree-checkbox-72" type="checkbox">
<label for="toctree-checkbox-72">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../containers/tutorials/inference/index.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-73" name="toctree-checkbox-73" type="checkbox">
<label for="toctree-checkbox-73">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/tutorials/inference/tutorial-infer.html">
Run inference in pytorch neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/tutorials/inference/k8s_rn50_demo.html">
Deploy a TensorFlow Resnet50 model as a Kubernetes service
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../containers/tutorials/training/index.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-74" name="toctree-checkbox-74" type="checkbox">
<label for="toctree-checkbox-74">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/tutorials/training/tutorial-training.html">
Run training in Pytorch Neuron container
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/tutorials/training/k8s_mlp_train_demo.html">
Deploy a simple mlp training script as a Kubernetes job
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-75" name="toctree-checkbox-75" type="checkbox">
<label for="toctree-checkbox-75">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-76" name="toctree-checkbox-76" type="checkbox">
<label for="toctree-checkbox-76">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../general/devflows/index.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-77" name="toctree-checkbox-77" type="checkbox">
<label for="toctree-checkbox-77">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../containers/index.html">
Deploy Containers with Neuron
</a>
<input class="toctree-checkbox" id="toctree-checkbox-78" name="toctree-checkbox-78" type="checkbox">
<label for="toctree-checkbox-78">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/locate-neuron-dlc-image.html">
Locate Neuron DLC Image
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/getting-started.html">
Getting Started
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../containers/kubernetes-getting-started.html">
Kubernetes Getting Started
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../containers/tutorials.html">
Tutorials
</a>
<input class="toctree-checkbox" id="toctree-checkbox-79" name="toctree-checkbox-79" type="checkbox">
<label for="toctree-checkbox-79">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/tutorials/inference/index.html">
Inference
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/tutorials/training/index.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../containers/developerflows.html">
Developer Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-80" name="toctree-checkbox-80" type="checkbox">
<label for="toctree-checkbox-80">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/dlc-then-ec2-devflow.html">
Deploy Neuron Container on EC2
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/container-sm-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../containers/faq-troubleshooting-releasenote.html">
FAQ, Troubleshooting and Release Note
</a>
<input class="toctree-checkbox" id="toctree-checkbox-81" name="toctree-checkbox-81" type="checkbox">
<label for="toctree-checkbox-81">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/faq.html">
FAQ
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../containers/troubleshooting.html">
Troubleshooting Neuron Containers
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/containers/neuron-containers.html">
Neuron Containers Release Notes
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../release-notes/containers/neuron-k8.html">
Neuron K8 Release Notes
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../general/devflows/ec2-flows.html">
AWS EC2
</a>
<input class="toctree-checkbox" id="toctree-checkbox-82" name="toctree-checkbox-82" type="checkbox">
<label for="toctree-checkbox-82">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../general/devflows/inference/ec2-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-83" name="toctree-checkbox-83" type="checkbox">
<label for="toctree-checkbox-83">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/inference/ec2-then-ec2-devflow.html">
Compile with Framework API and Deploy on EC2 Inf1
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/inference/ec2-then-ec2-devflow-inf2.html">
Compile with Framework API and Deploy on EC2 Inf2
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../general/devflows/training/ec2-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-84" name="toctree-checkbox-84" type="checkbox">
<label for="toctree-checkbox-84">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/training/ec2/ec2-training.html">
Train your model on EC2
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../general/devflows/eks-flows.html">
Amazon EKS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-85" name="toctree-checkbox-85" type="checkbox">
<label for="toctree-checkbox-85">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../general/devflows/inference/eks-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-86" name="toctree-checkbox-86" type="checkbox">
<label for="toctree-checkbox-86">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/inference/dlc-then-eks-devflow.html">
Deploy Neuron Container on Elastic Kubernetes Service (EKS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../general/devflows/training/eks-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../general/devflows/ecs-flows.html">
AWS ECS
</a>
<input class="toctree-checkbox" id="toctree-checkbox-87" name="toctree-checkbox-87" type="checkbox">
<label for="toctree-checkbox-87">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../general/devflows/inference/ecs-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-88" name="toctree-checkbox-88" type="checkbox">
<label for="toctree-checkbox-88">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/inference/dlc-then-ecs-devflow.html">
Deploy Neuron Container on Elastic Container Service (ECS)
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../general/devflows/training/ecs-flows.html">
Training
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../general/devflows/sagemaker-flows.html">
Sagemaker
</a>
<input class="toctree-checkbox" id="toctree-checkbox-89" name="toctree-checkbox-89" type="checkbox">
<label for="toctree-checkbox-89">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../general/devflows/inference/sagemaker-flows.html">
Inference
</a>
<input class="toctree-checkbox" id="toctree-checkbox-90" name="toctree-checkbox-90" type="checkbox">
<label for="toctree-checkbox-90">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/inference/byoc-hosting-devflow-inf2.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf2 or trn1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/inference/byoc-hosting-devflow.html">
Bring Your Own Neuron Container to Sagemaker Hosting (inf1)
</a>
</li>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/inference/neo-then-hosting-devflow.html">
Compile with Sagemaker Neo and Deploy on Sagemaker Hosting (inf1)
</a>
</li>
</ul>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../general/devflows/training/sagemaker-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-91" name="toctree-checkbox-91" type="checkbox">
<label for="toctree-checkbox-91">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/training/sm-devflow/sm-training-devflow.html">
Train your model on SageMaker
</a>
</li>
</ul>
</li>
<li class="toctree-l3">
<a class="reference external" href="https://github.com/aws-neuron/aws-neuron-sagemaker-samples">
AWS Neuron Sagemaker Samples GitHub Repository
</a>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../general/devflows/parallelcluster-flows.html">
Parallel Cluster
</a>
<input class="toctree-checkbox" id="toctree-checkbox-92" name="toctree-checkbox-92" type="checkbox">
<label for="toctree-checkbox-92">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../general/devflows/inference/parallelcluster-flows.html">
Inference
</a>
</li>
<li class="toctree-l3 has-children">
<a class="reference internal" href="../../general/devflows/training/parallelcluster-flows.html">
Training
</a>
<input class="toctree-checkbox" id="toctree-checkbox-93" name="toctree-checkbox-93" type="checkbox">
<label for="toctree-checkbox-93">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l4">
<a class="reference internal" href="../../general/devflows/training/parallelcluster/parallelcluster-training.html">
Train your model on ParallelCluster
</a>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2 has-children">
<a class="reference internal" href="../../general/devflows/aws-batch-flows.html">
AWS Batch Flows
</a>
<input class="toctree-checkbox" id="toctree-checkbox-94" name="toctree-checkbox-94" type="checkbox">
<label for="toctree-checkbox-94">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l3">
<a class="reference internal" href="../../general/devflows/inference/aws-batch-flows.html">
Inference
</a>
</li>
<li class="toctree-l3">
<a class="reference internal" href="../../general/devflows/training/aws-batch-flows.html">
Training
</a>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
Learning Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../general/arch/index.html">
Architecture
</a>
<input class="toctree-checkbox" id="toctree-checkbox-95" name="toctree-checkbox-95" type="checkbox">
<label for="toctree-checkbox-95">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-hardware/inf1-arch.html">
AWS Inf1 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-hardware/trn1-arch.html">
AWS Trn1/Trn1n Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-hardware/inf2-arch.html">
AWS Inf2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-hardware/inferentia.html">
Inferentia Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-hardware/inferentia2.html">
Inferentia2 Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-hardware/trainium.html">
Trainium Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-hardware/neuroncores-arch.html">
AWS NeuronCore Architecture
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/model-architecture-fit.html">
Neuron Model Architecture Fit Guidelines
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/glossary.html">
Neuron Glossary
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../general/arch/neuron-features/index.html">
Features
</a>
<input class="toctree-checkbox" id="toctree-checkbox-96" name="toctree-checkbox-96" type="checkbox">
<label for="toctree-checkbox-96">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-features/data-types.html">
Data Types
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-features/rounding-modes.html">
Rounding Modes
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-features/neuroncore-batching.html">
Neuron Batching
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-features/neuroncore-pipeline.html">
NeuronCore Pipeline
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-features/neuron-caching.html">
Neuron Persistent Cache
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-features/collective-communication.html">
Collective Communication
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-features/control-flow.html">
Neuron Control Flow
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-features/custom-c%2B%2B-operators.html">
Neuron Custom C++ Operators
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/arch/neuron-features/dynamic-shapes.html">
Neuron Dynamic Shapes
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../general/appnotes/index.html">
Application Notes
</a>
<input class="toctree-checkbox" id="toctree-checkbox-97" name="toctree-checkbox-97" type="checkbox">
<label for="toctree-checkbox-97">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../general/announcements/neuron2.x/neuron2-intro.html">
Introducing first release of Neuron 2.x enabling EC2 Trn1 general availability (GA)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/appnotes/neuron1x/introducing-libnrt.html">
Introducing Neuron Runtime 2.x (libnrt.so)
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/appnotes/perf/neuron-cc/performance-tuning.html">
Performance Tuning
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/appnotes/perf/neuron-cc/parallel-ncgs.html">
Parallel Execution using NEURON_RT_NUM_CORES
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/appnotes/torch-neuron/rcnn-app-note.html">
Running R-CNNs on Inf1
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/appnotes/transformers-neuronx/generative-llm-inference-with-neuron.html">
Generative LLM inference with Neuron
</a>
</li>
</ul>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../general/faq.html">
FAQ
</a>
</li>
<li class="toctree-l1">
<a class="reference internal" href="../../general/troubleshooting.html">
Troubleshooting
</a>
</li>
</ul>
<p aria-level="2" class="caption" role="heading">
<span class="caption-text">
About Neuron
</span>
</p>
<ul class="nav bd-sidenav">
<li class="toctree-l1">
<a class="reference internal" href="../../release-notes/release.html">
Release Details
</a>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../general/roadmap-readme.html">
Roadmap
</a>
<input class="toctree-checkbox" id="toctree-checkbox-98" name="toctree-checkbox-98" type="checkbox">
<label for="toctree-checkbox-98">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference external" href="https://github.com/orgs/aws-neuron/projects/1/views/1">
Neuron Public Roadmap
</a>
</li>
</ul>
</li>
<li class="toctree-l1 has-children">
<a class="reference internal" href="../../general/support.html">
Support
</a>
<input class="toctree-checkbox" id="toctree-checkbox-99" name="toctree-checkbox-99" type="checkbox">
<label for="toctree-checkbox-99">
<i class="fas fa-chevron-down">
</i>
</label>
<ul>
<li class="toctree-l2">
<a class="reference internal" href="../../general/sdk-policy.html">
SDK Maintenance Policy
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/security.html">
Security Disclosures
</a>
</li>
<li class="toctree-l2">
<a class="reference internal" href="../../general/contact.html">
Contact Us
</a>
</li>
</ul>
</li>
</ul>
</div>
</nav></div>
<div class="bd-sidebar__bottom">
<!-- To handle the deprecated key -->
<div class="navbar_extra_footer">
Theme by the <a href="https://ebp.jupyterbook.org">Executable Book Project</a>
</div>
</div>
</div>
<div id="rtd-footer-container"><!-- Inserted RTD Footer -->
<div class="injected">
<div class="rst-versions rst-badge" data-toggle="rst-versions">
<span class="rst-current-version" data-toggle="rst-current-version">
<span class="fa fa-book"> </span>
v: v2.14.1
<span class="fa fa-caret-down"></span>
</span>
<div class="rst-other-versions">
<dl>
<dt>Versions</dt>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/tools/tutorials/torch-neuronx-profiling-with-tb.html">latest</a>
</dd>
<dd class="rtd-current-item">
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.14.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.14.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.13.2/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.13.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.13.1/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.13.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.13.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.13.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.12.2/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.12.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.12.1/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.12.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.12.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.12.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.11.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.11.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.10.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.10.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.9.1/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.9.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.9.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.9.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.8.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.8.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.7.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.7.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.6.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.6.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.5.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.5.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.4.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.4.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v2.3.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v2.3.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.19.2/tools/tutorials/torch-neuronx-profiling-with-tb.html">v1.19.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.19.1/tools/tutorials/torch-neuronx-profiling-with-tb.html">v1.19.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.19.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v1.19.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.18.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v1.18.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.17.2/tools/tutorials/torch-neuronx-profiling-with-tb.html">v1.17.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.17.1/tools/tutorials/torch-neuronx-profiling-with-tb.html">v1.17.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.17.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v1.17.0</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.3/tools/tutorials/torch-neuronx-profiling-with-tb.html">v1.16.3</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.2/tools/tutorials/torch-neuronx-profiling-with-tb.html">v1.16.2</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.1/tools/tutorials/torch-neuronx-profiling-with-tb.html">v1.16.1</a>
</dd>
<dd>
<a href="https://awsdocs-neuron.readthedocs-hosted.com/en/v1.16.0/tools/tutorials/torch-neuronx-profiling-with-tb.html">v1.16.0</a>
</dd>
</dl>
<dl>
<dt>Downloads</dt>
<dd><a href="//awsdocs-neuron.readthedocs-hosted.com/_/downloads/en/v2.14.1/pdf/">PDF</a></dd>
</dl>
<dl>
<dt>On GitHub</dt>
<dd>
<a href="https://github.com/aws/aws-neuron-sdk/blob/v2.14.1//tools/tutorials/torch-neuronx-profiling-with-tb.rst">View</a>
</dd>
</dl>
<hr>
<div>
<div>
Documentation hosted by <a href="https://readthedocs.com">Read the Docs</a>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- A tiny helper pixel to detect if we've scrolled -->
<div class="sbt-scroll-pixel-helper"></div>
<!-- Main content -->
<div class="col py-0 content-container">
<div class="header-article row sticky-top noprint">
<div class="col py-1 d-flex header-article-main">
<div class="header-article__left">
<label for="__navigation" class="headerbtn" data-toggle="tooltip" data-placement="right" title="" data-original-title="Toggle navigation">
<span class="headerbtn__icon-container">
<i class="fas fa-bars"></i>
</span>
</label>
</div>
<div class="header-article__right">
<button onclick="toggleFullScreen()" class="headerbtn" data-toggle="tooltip" data-placement="bottom" title="" data-original-title="Fullscreen mode">
<span class="headerbtn__icon-container">
<i class="fas fa-expand"></i>
</span>
</button>
<div class="menu-dropdown menu-dropdown-repository-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Source repositories">
<i class="fab fa-github"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Source repository">
<span class="headerbtn__icon-container">
<i class="fab fa-github"></i>
</span>
<span class="headerbtn__text-container">repository</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/issues/new?title=Issue%20on%20page%20%2Ftools/tutorials/torch-neuronx-profiling-with-tb.html&body=Your%20issue%20content%20here." class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Open an issue">
<span class="headerbtn__icon-container">
<i class="fas fa-lightbulb"></i>
</span>
<span class="headerbtn__text-container">open issue</span>
</a>
</li>
<li>
<a href="https://github.com/aws-neuron/aws-neuron-sdk/edit/v2.14.1/tools/tutorials/torch-neuronx-profiling-with-tb.rst" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Edit this page">
<span class="headerbtn__icon-container">
<i class="fas fa-pencil-alt"></i>
</span>
<span class="headerbtn__text-container">suggest edit</span>
</a>
</li>
</ul>
</div>
</div>
<div class="menu-dropdown menu-dropdown-download-buttons">
<button class="headerbtn menu-dropdown__trigger" aria-label="Download this page">
<i class="fas fa-download"></i>
</button>
<div class="menu-dropdown__content">
<ul>
<li>
<a href="../../_sources/tools/tutorials/torch-neuronx-profiling-with-tb.rst.txt" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Download source file">
<span class="headerbtn__icon-container">
<i class="fas fa-file"></i>
</span>
<span class="headerbtn__text-container">.rst</span>
</a>
</li>
<li>
<button onclick="printPdf(this)" class="headerbtn" data-toggle="tooltip" data-placement="left" title="" data-original-title="Print to PDF">
<span class="headerbtn__icon-container">
<i class="fas fa-file-pdf"></i>
</span>
<span class="headerbtn__text-container">.pdf</span>
</button>
</li>
</ul>
</div>
</div>
<label for="__page-toc" class="headerbtn headerbtn-page-toc">
<span class="headerbtn__icon-container">
<i class="fas fa-list"></i>
</span>
</label>
</div>
</div>
<!-- Table of contents -->
<div class="col-md-3 bd-toc show noprint">
<div class="tocsection onthispage pt-5 pb-3">
<i class="fas fa-list"></i> Contents
</div>
<nav id="bd-toc-nav" aria-label="Page">
<ul class="visible nav section-nav flex-column">
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#introduction">
Introduction
</a>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#setup">
Setup
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#prerequisites">
Prerequisites
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#environment">
Environment
</a>
</li>
</ul>
</li>
<li class="toc-h2 nav-item toc-entry">
<a class="reference internal nav-link" href="#part-1-operator-level-trace-for-xm-markstep-workflow">
Part 1: Operator Level Trace for
<code class="docutils literal notranslate">
<span class="pre">
xm.markstep()
</span>
</code>
workflow
</a>
<ul class="nav section-nav flex-column">
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#goal">
Goal
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#set-up">
Set Up
</a>
</li>
<li class="toc-h3 nav-item toc-entry">
<a class="reference internal nav-link" href="#understanding-the-code">
Understanding the Code
</a>
</li>
*This document is relevant for*: `Inf1`, `Inf2`, `Trn1`, `Trn1n`

# Profiling PyTorch Neuron (`torch-neuronx`) with TensorBoard[#](#profiling-pytorch-neuron-torch-neuronx-with-tensorboard "Permalink to this headline")

**Table of Contents**

- [Introduction](#introduction)
- [Setup](#setup)
  - [Prerequisites](#prerequisites)
  - [Environment](#environment)
- [Part 1: Operator Level Trace for the `xm.mark_step()` workflow](#part-1-operator-level-trace-for-xm-markstep-workflow)
  - [Goal](#goal)
  - [Set Up](#set-up)
  - [Understanding the Code](#understanding-the-code)
  - [Running The Profiler](#running-the-profiler)
  - [Loading the Operators Level Trace in TensorBoard](#loading-the-operators-level-trace-in-tensorboard)
  - [Operator Framework View](#operator-framework-view)
  - [Operator HLO View](#operator-hlo-view)
  - [Operator Trace View](#operator-trace-view)
  - [Understanding the Low Level Timeline](#understanding-the-low-level-timeline)
- [Part 2: Operator Level Trace with the `torch_neuronx.trace()` workflow](#part-2-operator-level-trace-with-torch-neuronx-trace-workflow)
  - [Set Up](#id2)
  - [Important code differences from Part 1](#important-code-differences-from-part-1)
  - [Running Part 2](#running-part-2)
  - [Loading the Operators Level Trace in TensorBoard](#id3)
  - [Notable Differences in Timeline View from Part 1](#notable-differences-in-timeline-view-from-part-1)
<div class="section" id="introduction">
<h2><a class="toc-backref" href="#id4">Introduction</a><a class="headerlink" href="#introduction" title="Permalink to this headline">#</a></h2>
<p>Neuron provides a plugin for TensorBoard that allows users to measure and visualize
performance on a torch runtime level or an operator
level. With this information, it becomes quicker to identify any
performance bottleneck allowing for quicker addressing of that issue.</p>
<p>For more information on the Neuron plugin for TensorBoard, see <a class="reference internal" href="../tensorboard/getting-started-tensorboard-neuronx-plugin.html#neuronx-plugin-tensorboard"><span class="std std-ref">Neuron Plugin for TensorBoard (Trn1)</span></a>.</p>
</div>
<div class="section" id="setup">
<h2><a class="toc-backref" href="#id5">Setup</a><a class="headerlink" href="#setup" title="Permalink to this headline">#</a></h2>
<div class="section" id="prerequisites">
<h3><a class="toc-backref" href="#id6">Prerequisites</a><a class="headerlink" href="#prerequisites" title="Permalink to this headline">#</a></h3>
<ol class="arabic simple">
<li><p>Initial <a class="reference external" href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/setup/pytorch-install.html">Trn1 setup for PyTorch
(torch-neuronx)</a>
has been done</p></li>
</ol>
</div>
<div class="section" id="environment">
<h3><a class="toc-backref" href="#id7">Environment</a><a class="headerlink" href="#environment" title="Permalink to this headline">#</a></h3>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="c1">#activate python virtual environment and install tensorboard_plugin_neuron</span>
<span class="n">source</span> <span class="o">~/</span><span class="n">aws_neuron_venv_pytorch_p38</span><span class="o">/</span><span class="nb">bin</span><span class="o">/</span><span class="n">activate</span>
<span class="n">pip</span> <span class="n">install</span> <span class="n">tensorboard_plugin_neuronx</span>
<span class="c1">#create work directory for the Neuron Profiling tutorials</span>
<span class="n">mkdir</span> <span class="o">-</span><span class="n">p</span> <span class="o">~/</span><span class="n">neuron_profiling_tensorboard_examples</span>
<span class="n">cd</span> <span class="o">~/</span><span class="n">neuron_profiling_tensorboard_examples</span>
</pre></div>
</div>
</div>
</div>
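Before moving on, it can help to confirm that the packages used throughout this tutorial import cleanly inside the activated virtual environment. This is only a quick sanity check, not part of the original setup steps; the module names are the same ones imported by the scripts below.

```
# sanity_check.py -- run inside the activated virtual environment
import torch
import torch_neuronx                    # Neuron PyTorch integration used for tracing/profiling
import torch_xla.core.xla_model as xm   # XLA device handle used in the mark_step workflow

print("torch version:", torch.__version__)
print("imports OK")
```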
<div class="section" id="part-1-operator-level-trace-for-xm-markstep-workflow">
<h2><a class="toc-backref" href="#id8">Part 1: Operator Level Trace for <code class="docutils literal notranslate"><span class="pre">xm.markstep()</span></code> workflow</a><a class="headerlink" href="#part-1-operator-level-trace-for-xm-markstep-workflow" title="Permalink to this headline">#</a></h2>
<div class="section" id="goal">
<h3><a class="toc-backref" href="#id9">Goal</a><a class="headerlink" href="#goal" title="Permalink to this headline">#</a></h3>
<p>After completing this tutorial, the user should be able to understand
the features of the Operator Level Trace. The user should also be able
to form a narrative/surface level analysis from what is being presented
in the Operator Level Trace.</p>
</div>
<div class="section" id="set-up">
<h3><a class="toc-backref" href="#id10">Set Up</a><a class="headerlink" href="#set-up" title="Permalink to this headline">#</a></h3>
<p>Let’s set up a directory containing the material for this demo</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">cd</span> <span class="o">~/</span><span class="n">neuron_profiling_tensorboard_examples</span>
<span class="n">mkdir</span> <span class="n">tutorial_1</span>
<span class="n">cd</span> <span class="n">tutorial_1</span>
<span class="c1"># this is where our code will be written</span>
<span class="n">touch</span> <span class="n">run</span><span class="o">.</span><span class="n">py</span>
</pre></div>
</div>
<p>Here is the code for <code class="docutils literal notranslate"><span class="pre">run.py</span></code>:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">os</span>
<span class="kn">import</span> <span class="nn">torch</span>
<span class="kn">import</span> <span class="nn">torch_neuronx</span>
<span class="kn">from</span> <span class="nn">torch_neuronx.experimental</span> <span class="kn">import</span> <span class="n">profiler</span>
<span class="kn">import</span> <span class="nn">torch_xla.core.xla_model</span> <span class="k">as</span> <span class="nn">xm</span>
<span class="n">os</span><span class="o">.</span><span class="n">environ</span><span class="p">[</span><span class="s2">"NEURON_CC_FLAGS"</span><span class="p">]</span> <span class="o">=</span> <span class="s2">"--cache_dir=./compiler_cache"</span>
<span class="n">device</span> <span class="o">=</span> <span class="n">xm</span><span class="o">.</span><span class="n">xla_device</span><span class="p">()</span>
<span class="k">class</span> <span class="nc">NN</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Module</span><span class="p">):</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">layer1</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Linear</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">nl1</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">ReLU</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">layer2</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Linear</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span><span class="mi">2</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">nl2</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Tanh</span><span class="p">()</span>
<span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="n">x</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">nl1</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">layer1</span><span class="p">(</span><span class="n">x</span><span class="p">))</span>
<span class="k">return</span> <span class="bp">self</span><span class="o">.</span><span class="n">nl2</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">layer2</span><span class="p">(</span><span class="n">x</span><span class="p">))</span>
<span class="k">with</span> <span class="n">torch</span><span class="o">.</span><span class="n">no_grad</span><span class="p">():</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">NN</span><span class="p">()</span>
<span class="n">inp</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">)</span>
<span class="n">output</span> <span class="o">=</span> <span class="n">model</span><span class="p">(</span><span class="n">inp</span><span class="p">)</span>
<span class="k">with</span> <span class="n">torch_neuronx</span><span class="o">.</span><span class="n">experimental</span><span class="o">.</span><span class="n">profiler</span><span class="o">.</span><span class="n">profile</span><span class="p">(</span>
<span class="n">port</span><span class="o">=</span><span class="mi">9012</span><span class="p">,</span>
<span class="n">profile_type</span><span class="o">=</span><span class="s1">'operator'</span><span class="p">,</span>
<span class="n">ms_duration</span><span class="o">=</span><span class="mi">10000</span> <span class="p">):</span>
<span class="c1"># IMPORTANT: the model has to be transferred to XLA within</span>
<span class="c1"># the context manager, otherwise profiling won't work</span>
<span class="n">neuron_model</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">device</span><span class="p">)</span>
<span class="n">neuron_inp</span> <span class="o">=</span> <span class="n">inp</span><span class="o">.</span><span class="n">to</span><span class="p">(</span><span class="n">device</span><span class="p">)</span>
<span class="n">output_neuron</span> <span class="o">=</span> <span class="n">neuron_model</span><span class="p">(</span><span class="n">neuron_inp</span><span class="p">)</span>
<span class="n">xm</span><span class="o">.</span><span class="n">mark_step</span><span class="p">()</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"==CPU OUTPUT=="</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">output</span><span class="p">)</span>
<span class="nb">print</span><span class="p">()</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"==TRN1 OUTPUT=="</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">output_neuron</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="section" id="understanding-the-code">
<h3><a class="toc-backref" href="#id11">Understanding the Code</a><a class="headerlink" href="#understanding-the-code" title="Permalink to this headline">#</a></h3>
<p>For this first tutorial, we’ll be using a simple Feed forward NN model.
However, once the TensorBoard dashboard is up, we’ll see some
interesting and unexpected things. A simple model is helpful since it is
easy to reference back to.</p>
<p>Another important part is the “operator” profiling type we specified in the context manager.</p>
<p><strong>Low Level:</strong> The “operator“ dashboard is the dashboard that contains
the Operator Level Trace This view also only zooms in on the
NeuronDevice, while the ”trace“ dashboard shows processes from all
devices. The Operator Level Trace View is organized by levels of
abstraction, with the top level showing the model class. The next lower
tier shows model components, and the lowest tier shows specific
operators that occur for a specific model component. This view is useful
for identifying model bottlenecks at the operator level.</p>
<p>We also print out the outputs from the CPU model and the TRN1 model to note
the small differences in output.</p>
</div>
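If you want the runtime-level view mentioned above instead of (or in addition to) the operator view, the same context manager can be pointed at the "trace" dashboard. The snippet below is a minimal sketch under the assumption that the profiler accepts `profile_type='trace'`, which is how the two dashboards are distinguished above; `model`, `inp`, and `device` are as defined in `run.py`.

```
# sketch: same profiling context as run.py, but requesting the runtime-level
# "trace" dashboard instead of the operator-level one (assumed option value)
with torch_neuronx.experimental.profiler.profile(
        port=9012,
        profile_type='trace',   # runtime-level view across all devices
        ms_duration=10000):
    neuron_model = model.to(device)
    neuron_inp = inp.to(device)
    output_neuron = neuron_model(neuron_inp)
    xm.mark_step()
```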
<div class="section" id="running-the-profiler">
<h3><a class="toc-backref" href="#id12">Running The Profiler</a><a class="headerlink" href="#running-the-profiler" title="Permalink to this headline">#</a></h3>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">python</span> <span class="n">run</span><span class="o">.</span><span class="n">py</span>
</pre></div>
</div>
<p><strong>Output:</strong></p>
<p>Initial Output & Compilation Success</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Analyzing dependencies of Block1
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Analyzing dependencies of Block1
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Dependency reduction of sg0000
0% 10 20 30 40 50 60 70 80 90 100%``
|----|----|----|----|----|----|----|----|----|----|
***************************************************
</pre></div>
</div>
<p>Processing the Neuron Profiler Traces</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">torch_neuron</span><span class="p">:</span> <span class="n">Waiting</span> <span class="k">for</span> <span class="n">XLA</span> <span class="n">profile</span> <span class="n">completion</span> <span class="o">...</span>
<span class="n">torch_neuron</span><span class="p">:</span> <span class="n">translate_xplane</span><span class="p">:</span> <span class="n">Processing</span> <span class="n">plane</span><span class="p">:</span> <span class="s1">'/host:CPU'</span>
<span class="n">torch_neuron</span><span class="p">:</span> <span class="n">XLA</span> <span class="n">decode</span> <span class="o">-</span> <span class="n">Read</span> <span class="n">filename</span> <span class="mi">2023_04_28_00_54_04</span>
<span class="n">torch_neuron</span><span class="p">:</span> <span class="n">XLA</span> <span class="n">decode</span> <span class="o">-</span> <span class="n">Read</span> <span class="n">date</span> <span class="n">parts</span> <span class="p">[</span><span class="s1">'2023'</span><span class="p">,</span> <span class="s1">'04'</span><span class="p">,</span> <span class="s1">'28'</span><span class="p">,</span> <span class="s1">'00'</span><span class="p">,</span> <span class="s1">'54'</span><span class="p">,</span> <span class="s1">'04'</span><span class="p">]</span>
<span class="n">torch_neuron</span><span class="p">:</span> <span class="n">XLA</span> <span class="n">decode</span> <span class="o">-</span> <span class="n">Read</span> <span class="n">start</span> <span class="n">date</span> <span class="mi">2023</span><span class="o">-</span><span class="mi">04</span><span class="o">-</span><span class="mi">28</span> <span class="mi">00</span><span class="p">:</span><span class="mi">54</span><span class="p">:</span><span class="mi">04</span> <span class="kn">from</span> <span class="nn">directory</span> <span class="n">stamp</span>
<span class="n">torch_neuron</span><span class="p">:</span> <span class="n">translate_xplane</span><span class="p">:</span> <span class="n">Processing</span> <span class="n">plane</span><span class="p">:</span> <span class="s1">'/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline_split.json'</span>
<span class="n">torch_neuron</span><span class="p">:</span> <span class="n">translate_xplane</span><span class="p">:</span> <span class="n">Writing</span> <span class="n">plane</span><span class="p">:</span> <span class="s1">'/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline_split.json'</span> <span class="n">to</span> <span class="s1">'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_op_timeline_split.json'</span>
<span class="n">torch_neuron</span><span class="p">:</span> <span class="n">translate_xplane</span><span class="p">:</span> <span class="n">Processing</span> <span class="n">plane</span><span class="p">:</span> <span class="s1">'/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline.json'</span>
<span class="n">torch_neuron</span><span class="p">:</span> <span class="n">translate_xplane</span><span class="p">:</span> <span class="n">Writing</span> <span class="n">plane</span><span class="p">:</span> <span class="s1">'/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_op_timeline.json'</span> <span class="n">to</span> <span class="s1">'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_op_timeline.json'</span>
<span class="n">torch_neuron</span><span class="p">:</span> <span class="n">translate_xplane</span><span class="p">:</span> <span class="n">Processing</span> <span class="n">plane</span><span class="p">:</span> <span class="s1">'/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_hlo_op.json'</span>
<span class="n">torch_neuron</span><span class="p">:</span> <span class="n">translate_xplane</span><span class="p">:</span> <span class="n">Writing</span> <span class="n">plane</span><span class="p">:</span> <span class="s1">'/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_hlo_op.json'</span> <span class="n">to</span> <span class="s1">'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_hlo_op.json'</span>
<span class="n">torch_neuron</span><span class="p">:</span> <span class="n">translate_xplane</span><span class="p">:</span> <span class="n">Processing</span> <span class="n">plane</span><span class="p">:</span> <span class="s1">'/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_framework_op.json'</span>
<span class="n">torch_neuron</span><span class="p">:</span> <span class="n">translate_xplane</span><span class="p">:</span> <span class="n">Writing</span> <span class="n">plane</span><span class="p">:</span> <span class="s1">'/host:Neuron-runtime:profile//c1a992f0ea378f7a_1/model10001/node5/plugins/neuron/1682643254/neuron_framework_op.json'</span> <span class="n">to</span> <span class="s1">'temp_profiler_logs/c1a992f0ea378f7a_1/neuron_framework_op.json'</span>
</pre></div>
</div>
<p>Printing output from CPU model and Trn1 Model:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="o">==</span><span class="n">CPU</span> <span class="n">OUTPUT</span><span class="o">==</span>
<span class="n">tensor</span><span class="p">([[</span><span class="o">-</span><span class="mf">0.1396</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.3266</span><span class="p">],</span>
<span class="p">[</span><span class="o">-</span><span class="mf">0.0327</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.3105</span><span class="p">],</span>
<span class="p">[</span><span class="o">-</span><span class="mf">0.0073</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.3268</span><span class="p">],</span>
<span class="p">[</span><span class="o">-</span><span class="mf">0.1683</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.3230</span><span class="p">]])</span>
<span class="o">==</span><span class="n">TRN1</span> <span class="n">OUTPUT</span><span class="o">==</span>
<span class="n">tensor</span><span class="p">([[</span><span class="o">-</span><span class="mf">0.1396</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.3266</span><span class="p">],</span>
<span class="p">[</span><span class="o">-</span><span class="mf">0.0328</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.3106</span><span class="p">],</span>
<span class="p">[</span><span class="o">-</span><span class="mf">0.0067</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.3270</span><span class="p">],</span>
<span class="p">[</span><span class="o">-</span><span class="mf">0.1684</span><span class="p">,</span> <span class="o">-</span><span class="mf">0.3229</span><span class="p">]],</span> <span class="n">device</span><span class="o">=</span><span class="s1">'xla:1'</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="section" id="loading-the-operators-level-trace-in-tensorboard">
<h3><a class="toc-backref" href="#id13">Loading the Operators Level Trace in TensorBoard</a><a class="headerlink" href="#loading-the-operators-level-trace-in-tensorboard" title="Permalink to this headline">#</a></h3>
<p>Run <code class="docutils literal notranslate"><span class="pre">tensorboard</span> <span class="pre">--load_fast=false</span> <span class="pre">--logdir</span> <span class="pre">logs/</span></code></p>
<p>Take note of the port (usually 6006) and enter <code class="docutils literal notranslate"><span class="pre">localhost:<port></span></code> into
the local browser (assuming port forwarding is set up properly)</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Check <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html#tensorboard-interface-overview"><span class="std std-ref">Viewing the Trace on TensorBoard</span></a> to understand TensorBoard interface</p>
</div>
<p>The Operator Level Trace views are the same format plus an id at the
end; <code class="docutils literal notranslate"><span class="pre">year_month_day_hour_minute_second_millisecond_id</span></code>. The Tool
dropdown will have 3 options: operator-framework, operator-hlo, and
operator-timeline.</p>
</div>
<div class="section" id="operator-framework-view">
<h3><a class="toc-backref" href="#id14">Operator Framework View</a><a class="headerlink" href="#operator-framework-view" title="Permalink to this headline">#</a></h3>
<p><img alt="tensorboard-operator-framework-view" src="../../_images/Neuron_Profiler_T1_Op_Framework_View.png"></p>
<p>This view contains a pie-chart displaying the
proportional execution time for each of the model operators on the framework level for a
neuron device. The list of operators is shown in the bottom along with
other details about number of occurrences, execution time and neuron
device and core.</p>
</div>
<div class="section" id="operator-hlo-view">
<h3><a class="toc-backref" href="#id15">Operator HLO View</a><a class="headerlink" href="#operator-hlo-view" title="Permalink to this headline">#</a></h3>
<p><img alt="tensorboard-operator-hlo-view" src="../../_images/Neuron_Profiler_T1_Op_HLO_View.png"></p>
<p>This view contains a pie-chart displaying the
proportional execution time for each of the model operators on the hlo level for a
Neuron device. The list of operators is shown in the bottom along with
other details about number of occurrences, execution time and neuron
device and core.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>For this simple model, the pie chart will be the same as the framework view. This won’t be
the case for larger and more complex models.</p>
</div>
</div>
<div class="section" id="operator-trace-view">
<h3><a class="toc-backref" href="#id16">Operator Trace View</a><a class="headerlink" href="#operator-trace-view" title="Permalink to this headline">#</a></h3>
<p><img alt="tensorboard-operator-trace-view" src="../../_images/Neuron_Profiler_T1_Op_Trace_View.png"></p>
<div class="section" id="trace-view-sections">
<span id="id1"></span><h4>Trace View Sections<a class="headerlink" href="#trace-view-sections" title="Permalink to this headline">#</a></h4>
<p>Notice there are four sections: Process Overview, Control, Execution, and Data
Transfer. In each section there are more subdivisions with each layer
representing a certain level of abstraction. Also important to note that
the timescale axis is aligned between the two sections. This is
important to note as sometimes there are gaps in the process execution.
Most of the time, there are data transfer operations happening in
between the gaps.</p>
</div>
<div class="section" id="fusion-operators">
<h4>Fusion Operators<a class="headerlink" href="#fusion-operators" title="Permalink to this headline">#</a></h4>
<p><strong>Simple Case:</strong> Zooming in on the operations, we can recognize some
operations for a neural network, such as a dot product and transpose,
but sometimes there will be fused operators (fusion operators). To
understand these operators, click on it, and on the bottom of the
dashboard, some information will appear.</p>
<p><img alt="tensorboard-operator-trace-fusion-simple" src="../../_images/Neuron_Profiler_T1_Op_Trace_Fusion_Simple.png"></p>
<p>Notice in the above example the fusion operator is fusing the operator before and
after itself on the timeline. More specifically, <code class="docutils literal notranslate"><span class="pre">fused_3</span></code> is a fusion
of <code class="docutils literal notranslate"><span class="pre">NN[model]/input</span></code> and
<code class="docutils literal notranslate"><span class="pre">NN[model]/ReLU[nl1]/Tensor_1/aten__relu_maximum</span></code>. These kinds of
fusions occur when the <code class="docutils literal notranslate"><span class="pre">neuronx-cc</span></code> compiler has found an optimization
relating to the two operators. Most often this would be the execution of
the operators on separate compute engines or another form of parallelism.</p>
<p><strong>Complex Case:</strong> Most often, the order of fusion operators can get a
little complicated or contain “hidden” information. For the first example,
let’s zoom into the data transfer section such that we see the timescale range
from 6000 ns. to 6600 ns. It should look similar to below:</p>
<p><img alt="tensorboard-operator-trace-fusion-complex" src="../../_images/Neuron_Profiler_T1_Op_Trace_Fusion_Complex.png"></p>
<p>Looking at <code class="docutils literal notranslate"><span class="pre">fused_16</span></code> (11452 ns) we see it’s surrounded by other fused operators.
Furthermore, the <code class="docutils literal notranslate"><span class="pre">fused_16</span></code> operator fuses more than two operators: <code class="docutils literal notranslate"><span class="pre">NN[model]/Linear[layer1]/aten__addmm_add</span></code>,
<code class="docutils literal notranslate"><span class="pre">NN[model]/input</span></code>, and <code class="docutils literal notranslate"><span class="pre">NN[model]/Linear[layer1]/aten__addmm_dot</span></code>. These operators can be found in the timeline, but sometimes
the fused operators may not exist in the timeline due to it occurring within another operation. We go over an example of this case
in Part 2.</p>
</div>
</div>
<div class="section" id="understanding-the-low-level-timeline">
<h3><a class="toc-backref" href="#id17">Understanding the Low Level Timeline</a><a class="headerlink" href="#understanding-the-low-level-timeline" title="Permalink to this headline">#</a></h3>
<p>Looking at the trace we can look behind the scenes at how the model is
executed on neuron hardware. Before proceeding with the analysis, it is worth recalling the
way we defined the model for this tutorial:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="k">class</span> <span class="nc">NN</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Module</span><span class="p">):</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">layer1</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Linear</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">nl1</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">ReLU</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">layer2</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Linear</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span><span class="mi">2</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">nl2</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Tanh</span><span class="p">()</span>
<span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="n">x</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">nl1</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">layer1</span><span class="p">(</span><span class="n">x</span><span class="p">))</span>
<span class="k">return</span> <span class="bp">self</span><span class="o">.</span><span class="n">nl2</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">layer2</span><span class="p">(</span><span class="n">x</span><span class="p">))</span>
</pre></div>
</div>
<div class="section" id="analysis">
<h4>Analysis<a class="headerlink" href="#analysis" title="Permalink to this headline">#</a></h4>
<p><strong>Input Operators:</strong> We see input operators here. This is because in a markstep flow, we need to transfer inputs to the xla device. This is represented by the <code class="docutils literal notranslate"><span class="pre">SyncTensorsGraph.53</span></code> call.</p>
<p><strong>ReLU at the beginning:</strong> The first couple of blocks in the Process Data Transfer section initially appear to be confusing. There is an <code class="docutils literal notranslate"><span class="pre">Input</span></code> (0 ns.)
block followed by a <code class="docutils literal notranslate"><span class="pre">ReLU</span></code> (100 ns.) operator. Under the hood here, <code class="docutils literal notranslate"><span class="pre">ReLU</span></code> is rewritten as an <code class="docutils literal notranslate"><span class="pre">elementwise_max(arr,0)</span></code>,
(0 here means an array with zeros) but to create this operation, the zeros have to be set in memory, which is a data operation.
A general rule is that if an operator appears this early in the data transfer section, it most likely means there is an operation
lowering involving setting some values into memory for use later on.</p>
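To make the rewrite above concrete, here is a small standalone sketch (plain PyTorch on CPU, not profiler output) showing that ReLU is equivalent to an elementwise maximum against a pre-materialized tensor of zeros, which is the constant the profiler shows being set up in memory:

```
import torch

x = torch.randn(4, 4)

# the zeros tensor has to exist in memory before the elementwise max can run;
# on the device this shows up as an early data-transfer operation
zeros = torch.zeros_like(x)

relu_as_max = torch.maximum(x, zeros)
assert torch.equal(relu_as_max, torch.nn.functional.relu(x))
```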
**Memory allocation for Linear[layer1]:** We resume with the data transfer operations. Here, memory is allocated for specific operators, and sometimes the already-allocated parts of the input are loaded into operators while the rest of the input is still being allocated. This can be seen at `fused_18` (11811 ns) and `fused_23` (12181 ns). Eventually the input gets fully allocated, and other allocations occur for the dot product, transpose, and broadcast operators of `Linear[layer1]` and `Linear[layer2]`.
<div class="section" id="conclusion">
<h4>Conclusion<a class="headerlink" href="#conclusion" title="Permalink to this headline">#</a></h4>
<p>There are a few conclusions that can be determined from analyzing the timeline. We can see that we’ve been able to save a bit of time due to
parallelism with fusion operations, and saving some compute time with preloading operations (ex. <code class="docutils literal notranslate"><span class="pre">ReLU</span></code>). A clear trend is that a majority of the time is spent on data transfer operations.
It is also evident that even a simple Feed Forward NN becomes complicated when put under a microscope in the profiler. Facts such as the implementation of <code class="docutils literal notranslate"><span class="pre">ReLU</span></code> in the runtime/architecture, aren’t explicitly stated in the profiler, but do make
themselves known by the unusual ordering placement of the trace blocks and unusual fusion operators.</p>
<p>In terms of action items that can be taken based on our narrative, there
really isn’t any. This is a very very simple model that outputs after 8
microseconds, and we chose it because it is simple to understand. In
more realistic examples we will aim to do more compute than data
transfer on the hardware, and where possible to overlap data transfer
and compute between sequential operations.</p>
<p>The profiler revealed a lot of optimizations that were done, via fusion
operators and parallelism. However, the end goal of this tool is to be
able to improve performance by revealing the bottlenecks of the model.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>While we did explain some of the quirks visible in the profiler at a microscopic level, it isn’t necessary
to do so for normal use. This tutorial introduced the microscopic explanation for these occurrences to show to the
user that this is <em>indeed</em> what happens in the hardware when executing a simple FFNN.</p>
</div>
</div>
</div>
</div>
<div class="section" id="part-2-operator-level-trace-with-torch-neuronx-trace-workflow">
<h2><a class="toc-backref" href="#id18">Part 2: Operator Level Trace with <code class="docutils literal notranslate"><span class="pre">torch_neuronx.trace()</span></code> workflow</a><a class="headerlink" href="#part-2-operator-level-trace-with-torch-neuronx-trace-workflow" title="Permalink to this headline">#</a></h2>
<div class="section" id="id2">
<h3><a class="toc-backref" href="#id19">Set Up</a><a class="headerlink" href="#id2" title="Permalink to this headline">#</a></h3>
<p>The setup will be similar to Part 1.</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">cd</span> <span class="o">~/</span><span class="n">neuron_profiling_tensorboard_examples</span>
<span class="n">mkdir</span> <span class="n">tutorial_2</span>
<span class="n">cd</span> <span class="n">tutorial_2</span>
<span class="c1"># this is where our code will be written</span>
<span class="n">touch</span> <span class="n">run</span><span class="o">.</span><span class="n">py</span>
</pre></div>
</div>
<p>Here is the code for <code class="docutils literal notranslate"><span class="pre">run.py</span></code>:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">os</span>
<span class="kn">import</span> <span class="nn">time</span>
<span class="kn">import</span> <span class="nn">torch</span>
<span class="kn">import</span> <span class="nn">torch_neuronx</span>
<span class="kn">from</span> <span class="nn">torch_neuronx.experimental</span> <span class="kn">import</span> <span class="n">profiler</span>
<span class="k">class</span> <span class="nc">NN</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Module</span><span class="p">):</span>
<span class="k">def</span> <span class="fm">__init__</span><span class="p">(</span><span class="bp">self</span><span class="p">):</span>
<span class="nb">super</span><span class="p">()</span><span class="o">.</span><span class="fm">__init__</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">layer1</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Linear</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">nl1</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">ReLU</span><span class="p">()</span>
<span class="bp">self</span><span class="o">.</span><span class="n">layer2</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Linear</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span><span class="mi">2</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">nl2</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">Tanh</span><span class="p">()</span>
<span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">x</span><span class="p">):</span>
<span class="n">x</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">nl1</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">layer1</span><span class="p">(</span><span class="n">x</span><span class="p">))</span>
<span class="k">return</span> <span class="bp">self</span><span class="o">.</span><span class="n">nl2</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">layer2</span><span class="p">(</span><span class="n">x</span><span class="p">))</span>
<span class="n">model</span> <span class="o">=</span> <span class="n">NN</span><span class="p">()</span>
<span class="n">model</span><span class="o">.</span><span class="n">eval</span><span class="p">()</span>
<span class="n">inp</span> <span class="o">=</span> <span class="n">torch</span><span class="o">.</span><span class="n">rand</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span><span class="mi">4</span><span class="p">)</span>
<span class="n">output</span> <span class="o">=</span> <span class="n">model</span><span class="p">(</span><span class="n">inp</span><span class="p">)</span>
<span class="k">with</span> <span class="n">torch_neuronx</span><span class="o">.</span><span class="n">experimental</span><span class="o">.</span><span class="n">profiler</span><span class="o">.</span><span class="n">profile</span><span class="p">(</span>
<span class="n">port</span><span class="o">=</span><span class="mi">9012</span><span class="p">,</span>
<span class="n">profile_type</span><span class="o">=</span><span class="s1">'operator'</span><span class="p">,</span>
<span class="n">ms_duration</span><span class="o">=</span><span class="mi">10000</span><span class="p">,</span>
<span class="n">traced_only</span><span class="o">=</span><span class="kc">True</span><span class="p">):</span>
<span class="n">neuron_model</span> <span class="o">=</span> <span class="n">torch_neuronx</span><span class="o">.</span><span class="n">trace</span><span class="p">(</span><span class="n">model</span><span class="p">,</span><span class="n">inp</span><span class="p">,</span><span class="n">compiler_workdir</span><span class="o">=</span><span class="s2">"./compiler_cache"</span><span class="p">)</span>
<span class="n">neuron_model</span><span class="p">(</span><span class="n">inp</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"==CPU OUTPUT=="</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">output</span><span class="p">)</span>
<span class="nb">print</span><span class="p">()</span>
<span class="nb">print</span><span class="p">(</span><span class="s2">"==INF2 OUTPUT=="</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">output_neuron</span><span class="p">)</span>
</pre></div>
</div>
</div>
<div class="section" id="important-code-differences-from-part-1">
<h3><a class="toc-backref" href="#id20">Important code differences from Part 1</a><a class="headerlink" href="#important-code-differences-from-part-1" title="Permalink to this headline">#</a></h3>
<ol class="arabic simple">
<li><p><code class="docutils literal notranslate"><span class="pre">import</span> <span class="pre">torch_xla.core.xla_model</span> <span class="pre">as</span> <span class="pre">xm</span></code> is no longer necessary</p></li>
<li><p>Set <code class="docutils literal notranslate"><span class="pre">traced_only=True</span></code> in <code class="docutils literal notranslate"><span class="pre">torch_neuronx.experimental.profiler.profile()</span></code>. This option is necessary for traced models, otherwise the generated profile will not be accurate or not work.</p></li>
<li><p>Tracing the model with <code class="docutils literal notranslate"><span class="pre">torch_neuronx.trace()</span></code> and removing <code class="docutils literal notranslate"><span class="pre">xm.markstep()</span></code>.</p></li>
</ol>
<p>Otherwise, the code is the same as Part 1.</p>
</div>
<div class="section" id="running-part-2">
<h3><a class="toc-backref" href="#id21">Running Part 2</a><a class="headerlink" href="#running-part-2" title="Permalink to this headline">#</a></h3>
<p>To Run:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">python</span> <span class="n">run</span><span class="o">.</span><span class="n">py</span>
</pre></div>
</div>
<p>The output will look almost identical as Part 1</p>
</div>
<div class="section" id="id3">
<h3><a class="toc-backref" href="#id22">Loading the Operators Level Trace in TensorBoard</a><a class="headerlink" href="#id3" title="Permalink to this headline">#</a></h3>
<p>Run <code class="docutils literal notranslate"><span class="pre">tensorboard</span> <span class="pre">--load_fast=false</span> <span class="pre">--logdir</span> <span class="pre">logs/</span></code>, just like Part 1.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Check <a class="reference internal" href="../../frameworks/torch/torch-neuronx/programming-guide/torch-neuronx-profiling-dev-guide.html#tensorboard-interface-overview"><span class="std std-ref">Viewing the Trace on TensorBoard</span></a> to understand TensorBoard interface</p>
</div>
<p>Timeline View:</p>
<p><img alt="tensorboard-operator-trace-view-traced" src="../../_images/Neuron_Profiler_T1_Op_Trace_View_Traced.png"></p>
</div>
<div class="section" id="notable-differences-in-timeline-view-from-part-1">
<h3><a class="toc-backref" href="#id23">Notable Differences in Timeline View from Part 1:</a><a class="headerlink" href="#notable-differences-in-timeline-view-from-part-1" title="Permalink to this headline">#</a></h3>
<p><strong>No Input Operators:</strong> For a traced model, we do not transfer the input to an xla device, so these operations are not seen on the timeline. This also affects scheduling, which is why the time taken in
the profiling is less than the markstep one.</p>
<p><strong>Combined Loading of Linear[layer1] and Tanh:</strong> <code class="docutils literal notranslate"><span class="pre">fused_19</span></code> (5824 ns) contains a fusion between <code class="docutils literal notranslate"><span class="pre">Linear[layer1]</span></code> and <code class="docutils literal notranslate"><span class="pre">Tanh[nl2]</span></code>. This might be a bit odd, but such data loading parallelism
can be understood by understanding how tanh is implemented. Typically, functions like tanh are implemented by lookup tables that require being pre-loaded onto memory, which is a data transfer operation.
A bulk of data transfer operations are done in the beginning to optimize computations.</p>
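As a rough illustration of the lookup-table idea (a standalone CPU sketch, not how the Neuron runtime actually implements tanh), the table below is built once up front, which corresponds to the early data-transfer work, and every later evaluation is just an index into it:

```
import torch

# build a coarse lookup table for tanh once (the "preload into memory" step)
grid = torch.linspace(-4.0, 4.0, steps=4097)
table = torch.tanh(grid)

def tanh_from_table(x: torch.Tensor) -> torch.Tensor:
    # map x onto the nearest grid index and read the precomputed value
    idx = ((x.clamp(-4.0, 4.0) + 4.0) / 8.0 * (len(grid) - 1)).round().long()
    return table[idx]

x = torch.randn(4, 2)
print((tanh_from_table(x) - torch.tanh(x)).abs().max())  # small approximation error
```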
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Despite these differences, the big picture conclusion drawn from Part 1 still holds, as the two timelines are more similar than different. Some new insights drawn is that the traced model performs better than the markstep flow, since this was profiling a single forward pass.</p>
</div>
<p><em>This document is relevant for</em>: <code class="docutils literal notranslate"><span class="pre">Inf1</span></code>, <code class="docutils literal notranslate"><span class="pre">Inf2</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1</span></code>, <code class="docutils literal notranslate"><span class="pre">Trn1n</span></code></p>
</div>
</div>
</div>
<div class="section">
</div>
</div>
</main>
<footer class="footer-article noprint">
<!-- Previous / next buttons -->
<div class="prev-next-area">
</div>
</footer>
</div>
</div>
<div class="footer-content row">
<footer class="col footer"><p>
By AWS<br>
© Copyright 2023, Amazon.com.<br>
</p>
</footer>
</div>
</div>
</div>
</div>
<!-- Scripts loaded after <body> so the DOM is not blocked -->
<script src="../../_static/scripts/pydata-sphinx-theme.js?digest=1999514e3f237ded88cf"></script>
</body></html>
|
2023-09-29T20:55:01.801Z
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuronperf/neuronperf_terminology.rst.txt
|
```
.. _neuronperf_terminology:
NeuronPerf Terminology
======================
* Model Inputs
- An individual input or ``list`` of inputs
- Example: ``inputs = [(torch.ones((batch_size, 5))) for batch_size in batch_sizes]``
- Each input is associated with the ``batch_sizes`` specified, in the same order
- Each input is fed individually to a corresponding model
- If an input is provided as a ``tuple``, it will be destructured to ``model(*input)`` to support multiple args
- See :ref:`neuronperf_framework_notes` for framework-specific requirements
* Latency
- Time to execute a single ``model(input)``
- Typically measured in milliseconds
* Model
- Your data model; varies by framework. See :ref:`neuronperf_framework_notes`
- Models may be wrapped by submodules (``torch``, ``tensorflow``, ``mxnet``) as callables
* Model Index
- A JSON file that tracks compiled model artifacts
* Model Inputs
- A ``tuple`` of inputs passed to a model, i.e. a single complete example
- Example: ``input = (torch.ones((5, 3, 224, 224)),)``
* Throughput
- Inferences / second
```
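To make the Latency and Throughput definitions above concrete, here is a small standalone timing sketch (plain Python/PyTorch on CPU; it is not NeuronPerf code, just an illustration of the two quantities):

```
import time
import torch

model = torch.nn.Linear(5, 5)
inputs = torch.ones((1, 5))

n = 1000
start = time.perf_counter()
for _ in range(n):
    model(inputs)          # one model(input) call == one inference
elapsed = time.perf_counter() - start

latency_ms = elapsed / n * 1000.0   # average time per model(input), in milliseconds
throughput = n / elapsed            # inferences per second
print(f"latency: {latency_ms:.3f} ms, throughput: {throughput:.1f} inf/s")
```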
|
|
2023-09-29T20:55:01.922Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_downloads/2b32cad937aaadbf8a56550d68196836/test_resnet50_pt.py
|
```
import torch
import torch_neuron
import neuronperf as npf
import neuronperf.torch
from torchvision import models
# Load a pretrained ResNet50 model
model = models.resnet50(pretrained=True)
# Select a few batch sizes to test
filename = 'resnet50.json'
batch_sizes = [5, 6, 7]
# Construct example inputs
inputs = [torch.zeros([batch_size, 3, 224, 224], dtype=torch.float32) for batch_size in batch_sizes]
# Compile
npf.torch.compile(
model,
inputs,
batch_sizes=batch_sizes,
filename=filename,
)
# Benchmark
reports = npf.torch.benchmark(filename, inputs)
# View and save results
npf.print_reports(reports)
npf.write_csv(reports, 'resnet50_results.csv')
npf.write_json(reports, 'resnet50_results.json')
```
|
|
2023-09-29T20:55:03.352Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_downloads/72804eddf9834dd7ec61a923a56a89d4/test_simple_pt.py
|
```
import torch
import torch.neuron
import neuronperf as npf
import neuronperf.torch
# Define a simple model
class Model(torch.nn.Module):
def forward(self, x):
x = x * 3
return x + 1
# Instantiate
model = Model()
model.eval()
# Define some inputs
batch_sizes = [1]
inputs = [torch.ones((batch_size, 3, 224, 224)) for batch_size in batch_sizes]
# Compile for Neuron
model_neuron = torch.neuron.trace(model, inputs)
model_neuron.save("model_neuron_b1.pt")
# Benchmark
reports = npf.torch.benchmark("model_neuron_b1.pt", inputs, batch_sizes)
# View and save results
npf.print_reports(reports)
npf.write_csv(reports, "model_neuron_b1.csv")
```
|
|
2023-09-29T20:55:03.386Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuronperf/neuronperf_benchmark_guide.rst.txt
|
```
.. _neuronperf_benchmark_guide:
==========================
NeuronPerf Benchmark Guide
==========================
The call to ``neuronperf[torch/tensorflow/mxnet/cpu].benchmark`` is used to measure your model's performance. It will choose reasonable defaults if none are provided, and will return reports that summarize the benchmarking results.
What is the default behavior of ``benchmark``?
----------------------------------------------
That depends on how you provided your model and how it was compiled.
The two most common ways to provide your model are:
#. Provide the path to your compiled model
#. Provide the path to a model index from ``neuronperf.compile`` (a JSON file)
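For example, either of the following calls is valid (a sketch; the filenames are illustrative and assume ``inputs`` has already been constructed):
.. code:: python
# 1. Path to a single compiled model artifact
reports = npf.torch.benchmark('model_neuron_b1.pt', inputs, batch_sizes=[1])
# 2. Path to a model index produced by neuronperf.compile
reports = npf.torch.benchmark('model_index.json', inputs)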
Data Parallel
~~~~~~~~~~~~~
Your model is benchmarked on provided ``inputs`` in 4 different configurations:
#. A single model on 1 NeuronCore with one worker (min. latency)
#. A single model on 1 NeuronCore with two workers (max. throughput / NC)
#. ``MAX`` models on ``MAX`` NeuronCores with one worker (min. latency + max. instance usage)
#. ``MAX`` models on ``MAX`` NeuronCores with two workers (max. throughput + max. instance usage)
The value ``MAX`` is automatically determined by your instance size. If it can't be identified, those configurations will be skipped.
The primary benefit of (3) and (4) is to verify that your model scales well at maximum instance usage.
.. note::
If you provided the path to a model index from ``compile``:
* Your input parameters to ``benchmark`` (``batch_sizes``, etc.) are treated as filters on the index
* Each remaining model configuration is benchmarked as described in (1)
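For example, when benchmarking a model index, arguments act as filters on its entries (a sketch; the index filename and batch size are illustrative):
.. code:: python
# Only the batch-size-1 entries in the index are benchmarked
reports = npf.torch.benchmark('model_index.json', inputs, batch_sizes=[1])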
Pipeline
~~~~~~~~
Pipeline mode is active when using a Neuron device and ``pipeline_sizes > 1``. The same behavior as described in Data Parallel applies, except that only one worker configuration is executed: the optimal number of workers for your pipeline size, unless manually overridden.
Parameters
----------
Below are some useful and common parameters to tweak. Please see the :ref:`neuronperf_api` for full details.
* ``n_models`` controls how many models to load. The default behavior is ``n_models=[1, MAX]``.
* ``workers_per_model`` controls how many worker threads will be feeding inputs to each model. The default is automatically determined.
* ``pipeline_sizes`` tells the benchmarker how many cores are needed for your model so that each model instance can be loaded properly. Default is 1.
* ``duration`` controls how long to run each configuration.
* ``batch_sizes`` is used to inform the benchmarker of your input shape so that throughput can be computed correctly.
Almost all NeuronPerf behaviors are controllable via arguments found in the :ref:`neuronperf_api`. This guide attempts to provide some context and examples for those arguments.
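For instance, several of these parameters can be combined in one call (a sketch; all values are illustrative):
.. code:: python
reports = npf.torch.benchmark(
'model_index.json',
inputs,
n_models=[1, 4], # model copies to load
workers_per_model=[1, 2], # worker threads feeding each copy
pipeline_sizes=1, # NeuronCores needed per model copy
duration=60, # seconds per configuration
batch_sizes=[1], # informs the throughput calculation
)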
Inputs
------
Models accept one or more inputs to operate on. Since NeuronPerf needs to support multiple inputs for multiple models, as well as multi-input models, there are some details that may need your attention. See the :ref:`neuronperf_framework_notes` for details.
Multi-input Models
~~~~~~~~~~~~~~~~~~
If your model accepts multiple inputs, you must provide them in a ``tuple``. For example, suppose you have a model like this:
.. code:: python
class Model(torch.nn.Module):
def forward(self, x, y, z):
...
return output
In order for NeuronPerf to pass along your multiple inputs correctly, you should provide them as a ``tuple``:
.. code:: python
inputs = (x, y, z)
npf.torch.benchmark(model_filename, inputs, ...)
If you are compiling and/or benchmarking multiple models, you can pass different sized inputs as a list of tuples:
.. code:: python
inputs = [(x1, y1, z1), (x2, y2, z2), ...]
npf.torch.benchmark(model_filename, inputs, ...)
Preprocessing and Postprocessing
--------------------------------
Many models have additional preprocessing and postprocessing steps involved that may add non-negligible overhead to inference time. NeuronPerf supports these use cases through the use of custom functions.
Preprocessing
~~~~~~~~~~~~~
Recall that NeuronPerf expects each model input to be a ``tuple`` (and will wrap it in one if it is not). These tuples will be unpacked before calling your model.
Here is an example for a model with one input. The example multiplies the input by 5 before inference.
.. code:: python
def preprocess_fn(x):
return x * 5
...
# Benchmark with custom preprocessing function
reports = npf.torch.benchmark(
filename,
inputs,
...,
preprocess_fn = preprocess_fn,
)
Or if your model expects multiple inputs:
.. code:: python
def preprocess_fn(x, y, z):
return x / 255, y / 255, z / 255
...
# Benchmark with custom preprocessing function
reports = npf.torch.benchmark(
filename,
inputs,
...,
preprocess_fn = preprocess_fn,
)
Postprocessing
~~~~~~~~~~~~~~
Postprocessing is almost identical to preprocessing, except that your function will receive whatever the output of your model is, exactly as returned without modification. There are no type guarantees.
.. code:: python
def postprocess_fn(x):
return x.argmax()
...
# Benchmark with custom postprocessing function
reports = npf.torch.benchmark(
filename,
inputs,
...,
postprocess_fn = postprocess_fn,
)
Minimal Latency
---------------
Suppose you are interested in the minimal latency achievable with your model. In this case, there is no need for more than one worker to execute at a time, and we can manually specify the number of workers to use. See :ref:`neuronperf_worker_threads` below.
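A minimal sketch of such a call (assuming a compiled model file and matching ``inputs``):
.. code:: python
reports = npf.torch.benchmark('model_neuron_b1.pt', inputs, n_models=1, workers_per_model=1)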
.. _neuronperf_worker_threads:
Worker Threads
--------------
The argument ``workers_per_model`` controls the number of worker threads that are trying to prepare and load examples onto a single NeuronCore at a time. Therefore, a value of 1 corresponds to 1 thread / model. If ``n_models=16``, then there would be 16 worker threads, one per model. This number is selected based upon whether you are using DataParallel (i.e. ``pipeline_sizes == 1``), or Pipeline Mode (``pipeline_sizes != 1``).
By default, NeuronPerf will try multiple combinations of model copies and workers. You may be interested in controlling this manually.
.. code:: python
reports = npf.torch.benchmark('model_neuron_b1.pt', ..., workers_per_model=1)
You may also pass a list, as with other parameters:
.. code:: python
workers_per_model = [1, 2] # Same as the default for data parallel
reports = npf.torch.benchmark('model_neuron_b1.pt', ..., workers_per_model=workers_per_model)
With the default number of :ref:`neuronperf_model_copies`, a call to ``print_reports`` might look like this:
.. code:: bash
throughput_avg latency_ms_p50 latency_ms_p99 n_models pipeline_size workers_per_model batch_size model_filename
307.25 3.251 3.277 1 1 1 1 models/a5cff386-89ca-4bbf-9087-d0e624c3c604.pt
2746.0 5.641 6.82 16 1 1 1 models/a5cff386-89ca-4bbf-9087-d0e624c3c604.pt
329.5 6.053 6.108 1 1 2 1 models/a5cff386-89ca-4bbf-9087-d0e624c3c604.pt
2809.0 10.246 12.52 16 1 2 1 models/a5cff386-89ca-4bbf-9087-d0e624c3c604.pt
.. _neuronperf_model_copies:
Model Copies
------------
By default, NeuronPerf will benchmark two settings for ``n_models``:
1. A single copy
2. The maximum number of copies for your instance size
You can override this behavior by passing ``n_models`` to ``benchmark``, as shown below:
.. code:: python
reports = npf.torch.benchmark('model_neuron_b1.pt', ..., n_models=6)
or
.. code:: python
n_models = list(range(1, 10))
reports = npf.torch.benchmark('model_neuron_b1.pt', ..., n_models=n_models)
.. _neuronperf_pipeline_mode:
Pipeline Mode
-------------
By default, NeuronPerf will assume you intend to use DataParallel, with two exceptions:
* You compiled your model using NeuronPerf for pipeline mode
* You constructed a :ref:`neuronperf_model_index` that uses pipeline mode
You can also manually tell NeuronPerf that your model was compiled for pipeline mode. It is similar to how other arguments are passed.
.. code:: python
reports = npf.torch.benchmark('model_neuron_b1.pt', ..., pipeline_sizes=2)
If you are passing multiple models in an index, then you should pass a list for ``pipeline_sizes``.
.. code:: python
reports = npf.torch.benchmark('model_index.json', ..., pipeline_sizes=[1, 2, 3])
Duration
--------
NeuronPerf will benchmark each configuration specified for 60 seconds by default. You can control the duration by passing ``duration`` (in seconds).
.. code:: python
reports = npf.torch.benchmark('model_index.json', ..., duration=10)
.. warning::
If you make the duration too short, it may expire before all models are loaded and have had time to execute.
Custom Datasets (Beta)
----------------------
Currently, only PyTorch supports custom datasets, and the interface is subject to change. If you provide a custom dataset, it will be fully executed on each loaded model copy. So if you provide ``n_models=2``, your dataset will be run through twice in parallel.
To use this API, call ``benchmark`` passing a ``torch.utils.data.Dataset`` to ``inputs``. You can easily create your own ``Dataset`` by implementing the interface, or use one of the available datasets. For example:
.. code:: python
import torchvision
dataset = torchvision.datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=torchvision.transforms.ToTensor()
)
reports = npf.torch.benchmark('model_index.json', inputs=dataset, batch_sizes=[8], preprocess_fn=lambda x: x[0], loop_dataset=False)
.. note::
The ``preprocess_fn`` is required here to extract the image input from the ``(image, label)`` tuple produced by the dataset. If the dataset is too short to yield stable runtime performance numbers, set ``loop_dataset=True`` to cycle through the dataset until the benchmark duration elapses.
Results
-------
Viewing and Saving
~~~~~~~~~~~~~~~~~~
There are currently three ways to view results.
- ``neuronperf.print_reports(...)``
- Dump abbreviated results to your terminal
- ``neuronperf.write_csv(...)``
- Store metrics of interest as CSV
- ``neuronperf.write_json(...)``
- Store everything as JSON
See the :ref:`neuronperf_api` for full details.
Full Timing Results
~~~~~~~~~~~~~~~~~~~
NeuronPerf automatically combines and summarizes the detailed timing information collected during benchmarking. If you wish to receive everything back yourself, you can use:
.. code:: python
results = npf.torch.benchmark('model_index.json', ..., return_timers=True)
If you later wish to produce reports the same way that NeuronPerf does internally, you can call:
.. code:: python
reports = npf.get_reports(results)
Verbosity
---------
Verbosity is an integer, currently one of ``{0, 1, 2}``, where:
* 0 = SILENT
* 1 = INFO (default)
* 2 = VERBOSE / DEBUG
Example:
.. code:: python
reports = npf.torch.benchmark(..., n_models=1, duration=5, verbosity=2)
.. code:: bash
DEBUG:neuronperf.benchmarking - Cast mode was not specified, assuming default.
INFO:neuronperf.benchmarking - Benchmarking 'resnet50.json', ~5 seconds remaining.
DEBUG:neuronperf.benchmarking - Running model config: {'model_filename': 'models/model_b1_p1_83bh3hhs.pt', 'device_type': 'neuron', 'input_idx': 0, 'batch_size': 1, 'n_models': 1, 'workers_per_model': 2, 'pipeline_size': 1, 'cast_mode': None, 'multiprocess': True, 'multiinterpreter': False, 'start_dts': '20211111-062818', 'duration': '5'}
DEBUG:neuronperf.benchmarking - Benchmarker 0 started.
DEBUG:neuronperf.benchmarking - Benchmarker 0, Worker 0 started.
DEBUG:neuronperf.benchmarking - Benchmarker 0, Worker 1 started.
DEBUG:neuronperf.benchmarking - Benchmarker 0, Worker 0 finished after 738 inferences.
DEBUG:neuronperf.benchmarking - Benchmarker 0, Worker 1 finished after 738 inferences.
DEBUG:neuronperf.benchmarking - Benchmarker 0 finished.
throughput_avg latency_ms_p50 latency_ms_p99 n_models pipeline_size workers_per_model batch_size model_filename
329.667 6.073 6.109 1 1 2 1 models/model_b1_p1_83bh3hhs.pt
Internal Process Model
----------------------
For each model loaded (see :ref:`neuronperf_model_copies`), a process is spawned. Each process may use multiple threads (see :ref:`neuronperf_worker_threads`). The threads will continue to load examples and keep the hardware busy.
NeuronPerf spawns processes slightly differently between frameworks. For PyTorch and Apache MXNet (Incubating), processes are forked. For Tensorflow/Keras, a fresh interpreter is launched, and benchmarkers are serialized and run as a script.
If you suspect you are having trouble due to the way processes are managed, you have two mechanisms of control:
.. code:: python
reports = npf.torch.benchmark(..., multiprocess=False)
Default is ``True``; ``False`` will disable multiprocessing and run everything inside a single parent process. This may not work for all frameworks beyond the first model configuration, because process teardown is used to safely deallocate models from the hardware. Benchmarking this way is not recommended.
.. code:: python
reports = npf.torch.benchmark(..., multiinterpreter=True)
This flag controls whether a fresh interpreter is used instead of forking. Defaults to ``False`` except with Tensorflow/Keras.
.. _npf-cpu-gpu:
Benchmark on CPU or GPU
-----------------------
When benchmarking on CPU or GPU, the API is slightly different. With CPU or GPU, there is no compiled model to benchmark, so instead we need to directly pass a reference to the model class that will be instantiated.
.. note::
GPU benchmarking is currently only available for PyTorch.
CPU:
.. code:: python
cpu_reports = npf.cpu.benchmark(YourModelClass, ...)
GPU:
.. code:: python
gpu_reports = npf.torch.benchmark(YourModelClass, ..., device_type="gpu")
Your model class will be instantiated in a subprocess, so there are some things to keep in mind.
* Your model class must be defined at the top level inside a Python module
* i.e. don't place your model class definition inside a function or other nested scope
* If your model class has special Python module dependencies, consider importing them inside your class ``__init__``
* If your model class expects constructor arguments, wrap your class so that it has no constructor arguments
Example of a wrapped model class for CPU/GPU benchmarking:
.. code:: python
class ModelWrapper(torch.nn.Module):
def __init__(self):
super().__init__()
from transformers import AutoModelForSequenceClassification
model_name = "bert-base-cased"
self.bert = AutoModelForSequenceClassification.from_pretrained(model_name, return_dict=False)
self.add_module(model_name, self.bert)
def forward(self, *inputs):
return self.bert(*inputs)
reports = npf.torch.benchmark(ModelWrapper, inputs, device_type="gpu")
```
|
|
2023-09-29T20:55:03.402Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuronperf/rn.rst.txt
|
```
What's New
==========
.. toctree::
:maxdepth: 1
/release-notes/tools/neuronperf
```
|
|
2023-09-29T20:55:04.054Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuronperf/neuronperf_evaluate_guide.rst.txt
|
```
.. _neuronperf_evaluate_guide:
==========================
NeuronPerf Evaluate Guide
==========================
NeuronPerf has a new API for evaluating model accuracy on Neuron hardware. This API is currently only available for PyTorch.
You can access the API through standard ``benchmark()`` by passing an additional kwarg, ``eval_metrics``.
For example:
.. code:: python
reports = npf.torch.benchmark(
model_index_or_path,
dataset,
n_models=1,
workers_per_model=2,
duration=0,
eval_metrics=['accuracy', 'precision']
)
In this example, we fix ``n_models`` and ``workers_per_model`` because replicating the same model will not impact accuracy. We also set ``duration=0`` so that benchmarking runs untimed through all dataset examples.
Because this call can be tedious to type, a convenience function is provided:
.. code:: python
reports = npf.torch.evaluate(model_index_or_path, dataset, metrics=['accuracy', 'precision'])
.. note::
Please note that ``eval_metrics`` becomes ``metrics`` when using ``evaluate``.
The ``dataset`` can be any iterable object that produces ``tuple(*INPUTS, TARGET)``.
If ``TARGET`` does not appear in the last column for your dataset, you can customize this by passing ``eval_target_col``.
For example:
.. code:: python
reports = npf.torch.evaluate(model_index_or_path, dataset, metrics='accuracy', eval_target_col=1)
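As a concrete illustration, a hand-built dataset of ``(input, target)`` tuples works directly (a sketch with hypothetical data and shapes):
.. code:: python
import torch
# 100 hypothetical examples: one image-shaped input and an integer class target each
dataset = [(torch.randn(1, 3, 224, 224), torch.tensor(i % 10)) for i in range(100)]
reports = npf.torch.evaluate(model_index_or_path, dataset, metrics='accuracy')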
You can list the currently available metrics.
.. code:: python
>>> npf.list_metrics()
Name                     Description
Accuracy                 (TP + TN) / (TP + TN + FP + FN)
TruePositiveRate         TP / (TP + FN)
Sensitivity              Alias for TruePositiveRate
Recall                   Alias for TruePositiveRate
Hit Rate                 Alias for TruePositiveRate
TrueNegativeRate         TN / (TN + FP)
Specificity              Alias for TrueNegativeRate
Selectivity              Alias for TrueNegativeRate
PositivePredictiveValue  TP / (TP + FP)
Precision                Alias for PositivePredictiveValue
NegativePredictiveValue  TN / (TN + FN)
FalseNegativeRate        FN / (FN + TP)
FalsePositiveRate        FP / (FP + TN)
FalseDiscoveryRate       FP / (FP + TN)
FalseOmissionRate        FP / (FP + TP)
PositiveLikelihoodRatio  TPR / FPR
NegativeLikelihoodRatio  FNR / TNR
PrevalenceThreshold      sqrt(FPR) / (sqrt(FPR) + sqrt(TPR))
ThreatScore              TP / (TP + FN + FP)
F1Score                  2TP / (2TP + FN + FP)
MeanAbsoluteError        sum(|y - x|) / n
MeanSquaredError         sum((y - x)^2) / n
New metrics may appear in the list after importing a submodule. For example, ``import neuronperf.torch`` will register a new ``topk`` metric.
Custom Metrics
--------------
Simple Variants
===============
If you wish to register a metric that is a slight tweak of an existing metric with different ``init`` args, you can use ``register_metric_from_existing()``:
.. code:: python
npf.register_metric_from_existing("topk", "topk_3", k=3)
This example registers a new metric ``topk_3`` from the existing metric ``topk``, passing ``k=3`` at ``init`` time.
New Metrics
===========
You can register your own metrics using ``register_metric()``.
Your metrics must extend ``BaseEvalMetric``:
.. code:: python
class BaseEvalMetric(ABC):
"""
Abstract base class BaseEvalMetric from which other metrics inherit.
"""
@abstractmethod
def process_record(self, output: Any = None, target: Any = None) -> None:
"""Process an individual record and return the result."""
pass
@staticmethod
def aggregate(metrics: Iterable["BaseEvalMetric"]) -> Any:
"""Combine a sequence of metrics into a single result."""
raise NotImplementedError
For example:
.. code:: python
import neuronperf as npf
class MyCustomMetric(npf.BaseEvalMetric):
def __init__(self):
super().__init__()
self.passing = 0
self.processed = 0
def process_record(self, outputs, target):
self.processed += 1
if outputs == target:
self.passing += 1
@staticmethod
def aggregate(metrics):
passing = 0
processed = 0
for metric in metrics:
passing += metric.passing
processed += metric.processed
return passing / processed if processed else 0
npf.register_metric("MyCustomMetric", MyCustomMetric)
```
|
|
2023-09-29T20:55:04.064Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuronperf/neuronperf_faq.rst.txt
|
```
.. _neuronperf_faq:
NeuronPerf FAQ
==============
.. contents:: Table of contents
:local:
:depth: 1
When should I use NeuronPerf?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you want to measure the highest achievable performance for your model with Neuron.
When should I **not** use NeuronPerf?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When measuring end-to-end performance that includes your network serving stack. Instead, you should compare your end-to-end numbers with those obtained by NeuronPerf to quantify and optimize your serving overhead.
Which frameworks does NeuronPerf support?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
See :ref:`neuronperf_framework_notes`.
Which Neuron instance types does NeuronPerf support?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PyTorch and TensorFlow support all instance types.
MXNet support is limited to inf1.
Is NeuronPerf Open Source?
^^^^^^^^^^^^^^^^^^^^^^^^^^
Yes. You can :download:`download the source here </src/neuronperf.tar.gz>`.
What is the secret to obtaining the best numbers?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There is no secret sauce. NeuronPerf follows best practices.
What are the "best practices" that NeuronPerf uses?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- These vary slightly by framework and how your model was compiled
- For a model compiled for a single NeuronCore (DataParallel):
- To maximize throughput, for ``N`` models, use ``2 * N`` worker threads
- To minimize latency, use 1 worker thread per model
- Use a new Python process for each model to avoid GIL contention
- Ensure you benchmark long enough for your numbers to stabilize
- Ignore outliers at the start and end of inference benchmarking
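Expressed as ``benchmark`` arguments, these practices look roughly like this (a sketch; assumes ``neuronperf`` is imported as ``npf`` and a compiled model plus ``inputs`` are available):
.. code:: python
# Throughput-oriented: several model copies, two workers per copy
reports = npf.torch.benchmark('model_neuron_b1.pt', inputs, n_models=4, workers_per_model=2)
# Latency-oriented: one copy, one worker
reports = npf.torch.benchmark('model_neuron_b1.pt', inputs, n_models=1, workers_per_model=1)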
```
|
|
2023-09-29T20:55:04.101Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuronperf/neuronperf_framework_notes.rst.txt
|
```
.. _neuronperf_framework_notes:
==========================
NeuronPerf Framework Notes
==========================
PyTorch
=======
* Requires: ``torch-neuron`` or ``torch-neuronx``
- Versions: 1.7.x, 1.8.x, 1.9.x, 1.10.x, 1.11.x, 1.12.x, 1.13.x
* Input to ``compile``: ``torch.nn.Module``
* Model inputs: ``Any``.
TensorFlow 1.x
==============
* Requires: ``tensorflow-neuron``
- Versions: All
* Input to ``compile``: Path to uncompiled model dir from ``saved_model.simple_save``
* Model inputs: Tensors must be provided as ``numpy.ndarray``
.. note::
Although TensorFlow *tensors* must be ``ndarray``, this doesn't stop you from wrapping them inside of data structures that traverse process boundaries safely. For example, you can still pass an input ``dict`` like ``{'input_0': np.zeros((2, 1))}``.
TensorFlow 2.x
==============
* Requires: ``tensorflow-neuron`` or ``tensorflow-neuronx``
- Versions: All
* Input to ``compile``: ``tf.keras.Model``
* Model inputs: Tensors must be provided as ``numpy.ndarray``
.. note::
Although TensorFlow *tensors* must be ``ndarray``, this doesn't stop you from wrapping them inside of data structures that traverse process boundaries safely. For example, you can still pass an input ``dict`` like ``{'input_0': np.zeros((2, 1))}``.
Apache MXNet (Incubating)
=========================
* Requires: ``mxnet-neuron``
- Versions: 1.5, 1.8
* Input to ``compile``: ``tuple(sym, args, aux)``
* Inputs: Tensors must be provided as ``mxnet.ndarray`` or ``numpy.ndarray``
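A hedged sketch of compiling an MXNet checkpoint with NeuronPerf, assuming ``neuronperf.mxnet.compile`` accepts the same keyword arguments as the PyTorch examples (the checkpoint name and input shape are illustrative):
.. code:: python
import mxnet as mx
import neuronperf as npf
import neuronperf.mxnet
# Load a checkpoint and pass the (sym, args, aux) tuple to compile
sym, args, aux = mx.model.load_checkpoint('my_model', 0)
inputs = [mx.nd.ones((1, 3, 224, 224))]
npf.mxnet.compile((sym, args, aux), inputs, batch_sizes=[1], filename='my_model_index.json')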
```
|
|
2023-09-29T20:55:04.141Z
|
|
https://awsdocs-neuron.readthedocs-hosted.com/en/v2.14.1/_sources/tools/neuronperf/neuronperf_troubleshooting.rst.txt
|
```
.. _neuronperf_troubleshooting:
NeuronPerf Troubleshooting
==========================
.. contents:: Table of contents
:local:
:depth: 2
Compilation issues
^^^^^^^^^^^^^^^^^^
Model fails to compile
~~~~~~~~~~~~~~~~~~~~~~
Please `file a bug <https://github.com/aws/aws-neuron-sdk/issues>`_ with as much information as possible.
Benchmarking Issues
^^^^^^^^^^^^^^^^^^^
Benchmarking terminates early with errors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Scroll up and read the output. Most likely causes are:
- invalid input shapes or
- not enough memory to load the requested number of model copies on the device. Try passing ``n_models=1`` to ``benchmark`` again to test for memory issues, as in the sketch below.
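A minimal re-run for the memory check might look like this (a sketch, assuming ``neuronperf`` is imported as ``npf`` and the original model path and ``inputs`` are reused):
.. code:: python
# Re-run with a single model copy to rule out device memory pressure
reports = npf.torch.benchmark('model_neuron_b1.pt', inputs, n_models=1)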
Other Issues or Feature Requests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Please file a bug on `Github <https://github.com/aws/aws-neuron-sdk/issues>`_.
```
|
|
2023-09-29T20:55:04.177Z
|