
Commit 67e320c

Author: Torch-TensorRT Github Bot (committed)
docs: [Automated] Regenerating documenation for 919b9e9
Signed-off-by: Torch-TensorRT Github Bot <[email protected]>
1 parent 919b9e9 commit 67e320c

19 files changed: +43 −40 lines

docs/_notebooks/CitriNet-example.html (+1 −1)

@@ -857,7 +857,7 @@
 </div>
 </div>
 <p>
-<img alt="16b97c6c0c8746a9ad5b95a20bbb48b7" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
+<img alt="06dbf5b7da64440da85bb0a3a1fa0d39" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
 </p>
 <section id="Torch-TensorRT-Getting-Started---CitriNet">
 <h1 id="notebooks-citrinet-example--page-root">

docs/_notebooks/EfficientNet-example.html (+1 −1)

@@ -857,7 +857,7 @@
 </div>
 </div>
 <p>
-<img alt="2eb6e25acb6241e5a7fb71e7bbe4a514" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
+<img alt="014c3196047e4928abffdab4c053b10c" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
 </p>
 <section id="Torch-TensorRT-Getting-Started---EfficientNet-B0">
 <h1 id="notebooks-efficientnet-example--page-root">

docs/_notebooks/Hugging-Face-BERT.html (+1 −1)

@@ -809,7 +809,7 @@
 </div>
 </div>
 <p>
-<img alt="01cc7685bbcc49698ca422d848cd686b" src="https://developer.download.nvidia.com/tesla/notebook_assets/nv_logo_torch_trt_resnet_notebook.png"/>
+<img alt="ebee052e327d47b19490e23186c1589c" src="https://developer.download.nvidia.com/tesla/notebook_assets/nv_logo_torch_trt_resnet_notebook.png"/>
 </p>
 <section id="Masked-Language-Modeling-(MLM)-with-Hugging-Face-BERT-Transformer">
 <h1 id="notebooks-hugging-face-bert--page-root">

docs/_notebooks/Resnet50-example.html (+1 −1)

@@ -857,7 +857,7 @@
 </div>
 </div>
 <p>
-<img alt="4bc581df48894031abeaeade842f63e6" src="https://developer.download.nvidia.com/tesla/notebook_assets/nv_logo_torch_trt_resnet_notebook.png"/>
+<img alt="6a06541e0a714d89adfee0ad1925a35b" src="https://developer.download.nvidia.com/tesla/notebook_assets/nv_logo_torch_trt_resnet_notebook.png"/>
 </p>
 <section id="Torch-TensorRT-Getting-Started---ResNet-50">
 <h1 id="notebooks-resnet50-example--page-root">

docs/_notebooks/lenet-getting-started.html (+1 −1)

@@ -895,7 +895,7 @@
 </div>
 </div>
 <p>
-<img alt="95168dd749c543dea8d29528fa4769a5" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
+<img alt="0b18389c4b4949eba10f5d6e5d2beac1" src="http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png"/>
 </p>
 <section id="Torch-TensorRT-Getting-Started---LeNet">
 <h1 id="notebooks-lenet-getting-started--page-root">

docs/_notebooks/ssd-object-detection-demo.html (+1 −1)

@@ -915,7 +915,7 @@
 </div>
 </div>
 <p>
-<img alt="93aa88092fbf4efeb380fcbe20133f1a" src="https://developer.download.nvidia.com/tesla/notebook_assets/nv_logo_torch_trt_ssd_notebook.png"/>
+<img alt="89d2ee3efaf040128b4ca16a897787e3" src="https://developer.download.nvidia.com/tesla/notebook_assets/nv_logo_torch_trt_ssd_notebook.png"/>
 </p>
 <section id="Object-Detection-with-Torch-TensorRT-(SSD)">
 <h1 id="notebooks-ssd-object-detection-demo--page-root">

docs/_notebooks/vgg-qat.html (+1 −1)

@@ -922,7 +922,7 @@ <h2 id="Overview">
 </div>
 <p>
 ## 2. VGG16 Overview ### Very Deep Convolutional Networks for Large-Scale Image Recognition VGG is one of the earliest family of image classification networks that first used small (3x3) convolution filters and achieved significant improvements on ImageNet recognition challenge. The network architecture looks as follows
-<img alt="ba2d50f0e2f940918ef09b03f7da5774" src="https://neurohive.io/wp-content/uploads/2018/11/vgg16-1-e1542731207177.png"/>
+<img alt="108b3227d92f48dfac1e036b6a225fd3" src="https://neurohive.io/wp-content/uploads/2018/11/vgg16-1-e1542731207177.png"/>
 </p>
 <p>
 ## 3. Training a baseline VGG16 model We train VGG16 on CIFAR10 dataset. Define training and testing datasets and dataloaders. This will download the CIFAR 10 data in your
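The notebook text touched here walks through training a baseline VGG16 on CIFAR10 before quantization-aware training. For reference, the dataset/dataloader setup it describes looks roughly like the sketch below (standard torchvision calls; the batch size, normalization constants and `./data` path are assumptions, not values taken from the notebook):

```python
import torch
import torchvision
import torchvision.transforms as transforms

# Basic CIFAR10 preprocessing; the normalization constants are the commonly
# used CIFAR10 channel statistics, assumed here for illustration.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])

# download=True fetches CIFAR10 into ./data on first use, as the notebook notes.
train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True, num_workers=2)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=128, shuffle=False, num_workers=2)

# A baseline VGG16 to train on these loaders; torchvision's VGG16 stands in
# for the notebook's own model definition.
model = torchvision.models.vgg16(num_classes=10)
```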

docs/_sources/contributors/conversion.rst.txt (+2 −1)

@@ -19,7 +19,7 @@ inputs and assemble an array of resources to pass to the converter. Inputs can b

 * The input is an output of a node that has already been converted

-  * In this case the ITensor of the output has added to the to the ``value_tensor_map``,
+  * In this case the ITensor of the output has added to the ``value_tensor_map``,
    The conversion stage will add the ITensor to the list of args for the converter

 * The input is from a node that produces a static value

@@ -32,6 +32,7 @@ inputs and assemble an array of resources to pass to the converter. Inputs can b
   static value has been evaluated

 * The input is from a node that has not been converted
+
   * Torch-TensorRT will error out here

 Node Evaluation
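The markup fixed above sits in a list that enumerates the three ways a converter input is resolved. That decision can be summarized with the schematic Python below; the real logic lives in the C++ conversion stage, and the names `value_tensor_map`, `static_value_map` and the error message are illustrative only, not the actual implementation.

```python
def resolve_converter_input(node_input, value_tensor_map, static_value_map):
    """Illustrative sketch of the three cases described in conversion.rst."""
    # Case 1: the input comes from a node that has already been converted,
    # so its ITensor is already recorded and is passed to the converter.
    if node_input in value_tensor_map:
        return value_tensor_map[node_input]

    # Case 2: the input comes from a node that produces a static value,
    # which is evaluated and passed to the converter as a plain argument.
    if node_input in static_value_map:
        return static_value_map[node_input]

    # Case 3: the producing node has not been converted -- Torch-TensorRT
    # errors out here, as the newly nested sub-bullet states.
    raise RuntimeError("Input produced by an unconverted node; cannot convert")
```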

docs/_sources/contributors/lowering.rst.txt (+1 −1)

@@ -134,7 +134,7 @@ Removes _all_ tuples and raises an error if some cannot be removed, this is used
 Module Fallback
 *****************

-`Torch-TensorRT/core/lowering/passes/module_fallback.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/module_fallback.cpp>`
+`Torch-TensorRT/core/lowering/passes/module_fallback.cpp <https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/module_fallback.cpp>`_

 Module fallback consists of two lowering passes that must be run as a pair. The first pass is run before freezing to place delimiters in the graph around modules
 that should run in PyTorch. The second pass marks nodes between these delimiters after freezing to signify they should run in PyTorch.
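The passage fixed here (the link markup gains its trailing underscore) documents module fallback: two paired lowering passes that bracket chosen modules before freezing and mark the bracketed nodes to run in PyTorch after freezing. From the user's side this is normally driven through the compile spec; a minimal sketch, assuming the `torch_executed_modules` option of the Python API of roughly this release:

```python
import torch
import torch_tensorrt
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()

# torch_executed_modules lists fully qualified module class names; instances
# of these classes are wrapped in delimiters by the first fallback pass and
# the nodes in between are marked to run in PyTorch by the second.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float},
    torch_executed_modules=["torchvision.models.resnet.BasicBlock"],
)
```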

docs/_sources/contributors/partitioning.rst.txt (+3 −3)

@@ -3,6 +3,6 @@
 Partitioning Phase
 ====================

-The phase is optional and enabled by the user. It instructs the compiler to seperate nodes into ones that should run in PyTorch and ones that should run in TensorRT.
-Criteria for seperation include: Lack of a converter, operator is explicitly set to run in PyTorch by the user or the node has a flag which tells partitioning to
-run in PyTorch by the module fallback passes.
+The phase is optional and enabled by the user. It instructs the compiler to separate nodes into ones that should run in PyTorch and ones that should run in TensorRT.
+Criteria for separation include: Lack of a converter, operator is explicitly set to run in PyTorch by the user or the node has a flag which tells partitioning to
+run in PyTorch by the module fallback passes.
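The criteria spelled out in the corrected text (no converter, a user-requested op fallback, or a module-fallback flag) map onto compile-time options. A hedged sketch of forcing a single op into a Torch segment, assuming the `torch_executed_ops` and `require_full_compilation` options of the contemporary Python API:

```python
import torch
import torch_tensorrt
import torchvision

model = torchvision.models.resnet50(pretrained=True).eval()

trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float)],
    enabled_precisions={torch.float},
    # Explicitly run an op in PyTorch: partitioning places aten::max_pool2d
    # nodes into a Torch segment instead of a TensorRT engine.
    torch_executed_ops=["aten::max_pool2d"],
    # Partitioning only runs when full compilation is not required.
    require_full_compilation=False,
)
```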

docs/_sources/contributors/phases.rst.txt (+3 −3)

@@ -21,10 +21,10 @@ TensorRT.

 Partitioning
 ^^^^^^^^^^^^^
-:ref:`partitioning
+:ref:`partitioning`

-The phase is optional and enabled by the user. It instructs the compiler to seperate nodes into ones that should run in PyTorch and ones that should run in TensorRT.
-Criteria for seperation include: Lack of a converter, operator is explicitly set to run in PyTorch by the user or the node has a flag which tells partitioning to
+The phase is optional and enabled by the user. It instructs the compiler to separate nodes into ones that should run in PyTorch and ones that should run in TensorRT.
+Criteria for separation include: Lack of a converter, operator is explicitly set to run in PyTorch by the user or the node has a flag which tells partitioning to
 run in PyTorch by the module fallback passes.

 Conversion

docs/_sources/contributors/system_overview.rst.txt (+1 −1)

@@ -23,7 +23,7 @@ The repository is structured into:
 The C++ API is unstable and subject to change until the library matures, though most work is done under the hood in the core.

 The core has a couple major parts: The top level compiler interface which coordinates ingesting a module, lowering,
-converting and generating a new module and returning it back to the user. The there are the three main phases of the
+converting and generating a new module and returning it back to the user. There are the three main phases of the
 compiler, the lowering phase, the conversion phase, and the execution phase.

 .. include:: phases.rst
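The corrected sentence summarizes the top-level flow: a module is ingested, lowered, converted (or partitioned) and handed back as a new module with TensorRT engines embedded. From the user's side that round trip looks roughly like the sketch below; the example model, shapes and file name are illustrative assumptions.

```python
import torch
import torch_tensorrt
import torchvision

# Ingest: a TorchScript module is the compiler's input.
scripted = torch.jit.script(torchvision.models.mobilenet_v2().eval())

# Lowering, conversion/partitioning and engine generation all happen inside
# compile(); the result comes back as a new TorchScript module.
trt_module = torch_tensorrt.compile(
    scripted,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float},
)

# The returned module is still TorchScript, so it can be saved and executed
# like any other torch.jit.ScriptModule.
torch.jit.save(trt_module, "mobilenet_v2_trt.ts")
out = trt_module(torch.randn(1, 3, 224, 224).cuda())
```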

docs/contributors/conversion.html (+8 −2)

@@ -532,7 +532,7 @@ <h1 id="contributors-conversion--page-root">
 <ul>
 <li>
 <p>
-In this case the ITensor of the output has added to the to the
+In this case the ITensor of the output has added to the
 <code class="docutils literal notranslate">
 <span class="pre">
 value_tensor_map

@@ -570,8 +570,14 @@ <h1 id="contributors-conversion--page-root">
 <li>
 <p>
 The input is from a node that has not been converted
-* Torch-TensorRT will error out here
 </p>
+<ul>
+<li>
+<p>
+Torch-TensorRT will error out here
+</p>
+</li>
+</ul>
 </li>
 </ul>
 <section id="node-evaluation">

docs/contributors/lowering.html (+3 −3)

@@ -912,9 +912,9 @@ <h3 id="module-fallback">
 <blockquote>
 <div>
 <p>
-<cite>
-Torch-TensorRT/core/lowering/passes/module_fallback.cpp &lt;https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/module_fallback.cpp&gt;
-</cite>
+<a class="reference external" href="https://github.com/nvidia/Torch-TensorRT/blob/master/core/lowering/passes/module_fallback.cpp">
+Torch-TensorRT/core/lowering/passes/module_fallback.cpp
+</a>
 </p>
 </div>
 </blockquote>

docs/contributors/partitioning.html (+2 −2)

@@ -468,8 +468,8 @@ <h1 id="contributors-partitioning--page-root">
 </a>
 </h1>
 <p>
-The phase is optional and enabled by the user. It instructs the compiler to seperate nodes into ones that should run in PyTorch and ones that should run in TensorRT.
-Criteria for seperation include: Lack of a converter, operator is explicitly set to run in PyTorch by the user or the node has a flag which tells partitioning to
+The phase is optional and enabled by the user. It instructs the compiler to separate nodes into ones that should run in PyTorch and ones that should run in TensorRT.
+Criteria for separation include: Lack of a converter, operator is explicitly set to run in PyTorch by the user or the node has a flag which tells partitioning to
 run in PyTorch by the module fallback passes.
 </p>
 </section>

docs/contributors/phases.html (+5 −7)

@@ -520,17 +520,15 @@ <h2 id="partitioning">
 </a>
 </h2>
 <p>
-:ref:
-<a href="#id1">
-<span class="problematic" id="id2">
-`
+<a class="reference internal" href="partitioning.html#partitioning">
+<span class="std std-ref">
+Partitioning Phase
 </span>
 </a>
-partitioning
 </p>
 <p>
-The phase is optional and enabled by the user. It instructs the compiler to seperate nodes into ones that should run in PyTorch and ones that should run in TensorRT.
-Criteria for seperation include: Lack of a converter, operator is explicitly set to run in PyTorch by the user or the node has a flag which tells partitioning to
+The phase is optional and enabled by the user. It instructs the compiler to separate nodes into ones that should run in PyTorch and ones that should run in TensorRT.
+Criteria for separation include: Lack of a converter, operator is explicitly set to run in PyTorch by the user or the node has a flag which tells partitioning to
 run in PyTorch by the module fallback passes.
 </p>
 </section>

docs/contributors/system_overview.html (+6 −8)

@@ -617,7 +617,7 @@ <h1 id="contributors-system-overview--page-root">
 </p>
 <p>
 The core has a couple major parts: The top level compiler interface which coordinates ingesting a module, lowering,
-converting and generating a new module and returning it back to the user. The there are the three main phases of the
+converting and generating a new module and returning it back to the user. There are the three main phases of the
 compiler, the lowering phase, the conversion phase, and the execution phase.
 </p>
 <section id="compiler-phases">

@@ -657,17 +657,15 @@ <h3 id="partitioning">
 </a>
 </h3>
 <p>
-:ref:
-<a href="#id2">
-<span class="problematic" id="id3">
-`
+<a class="reference internal" href="partitioning.html#partitioning">
+<span class="std std-ref">
+Partitioning Phase
 </span>
 </a>
-partitioning
 </p>
 <p>
-The phase is optional and enabled by the user. It instructs the compiler to seperate nodes into ones that should run in PyTorch and ones that should run in TensorRT.
-Criteria for seperation include: Lack of a converter, operator is explicitly set to run in PyTorch by the user or the node has a flag which tells partitioning to
+The phase is optional and enabled by the user. It instructs the compiler to separate nodes into ones that should run in PyTorch and ones that should run in TensorRT.
+Criteria for separation include: Lack of a converter, operator is explicitly set to run in PyTorch by the user or the node has a flag which tells partitioning to
 run in PyTorch by the module fallback passes.
 </p>
 </section>

docs/py_api/ts.html (+1 −1)

@@ -2066,7 +2066,7 @@ <h2 id="functions">
 at
 </span>
 <span class="pre">
-0x7fc5e1feb7f0&gt;
+0x7f24b4674ef0&gt;
 </span>
 </span>
 </span>

docs/searchindex.js (+1 −1)

Generated file; diff not rendered.
