
User/orilevari/windowsai master merge #2674


Merged
61 commits merged on Dec 17, 2019
Commits
b50878d
Disable Attention fusion tests when DISABLE_CONTRIB_OPS is defined (#…
tianleiwu Dec 3, 2019
178d059
Setup java ci (#2528)
shahasad Dec 3, 2019
5c2e474
Add provision in ORT for session options to be parsed when available …
hariharans29 Dec 4, 2019
d748f89
Revert "Disable thread pool creation when enabled OpenMP (#2485)" (#2…
fs-eire Dec 4, 2019
293b154
Add dynamic shape support in TensorRT execution provider (#2450)
stevenlix Dec 4, 2019
3e7aaf8
User/xianz/telemetry (#2458)
zhangxiang1993 Dec 4, 2019
be56d77
Fix integer overflow in cuda NonMaxSuppression implementation (#2540)
fs-eire Dec 4, 2019
d34fb62
Introduce container type runtime checks and other improvements (#2522)
yuslepukhin Dec 5, 2019
281933f
Fix C API tests for centos and mac (#2544)
askhade Dec 5, 2019
bec4abf
Add back executable bit to build.py
snnn Dec 4, 2019
53a6bc2
Fix a bug handling negative begin pad values in Pad op (#2550)
hariharans29 Dec 5, 2019
4c996a8
DNNL CMAKE update (#2548)
sreekanth-yalachigere Dec 5, 2019
ace132f
Fix android build (#2558)
snnn Dec 5, 2019
854362c
Update win-x86-ci.yml (#2557)
RyanUnderhill Dec 6, 2019
7eddac1
Re-enable Windows C# tests (#2564)
snnn Dec 6, 2019
73c682b
disable onnx_test_runner -x invocations for dnnl (#2568)
jywu-msft Dec 6, 2019
038ee91
Allow sequence length to be symbolic (#2559)
tianleiwu Dec 6, 2019
eeb28a8
setup java ci mac (#2570)
shahasad Dec 6, 2019
34beafc
make layernorm fusion to support opset 11 (#2545)
yufenglee Dec 6, 2019
262ee9d
Fix a warning found in the latest VS release
snnn Dec 6, 2019
5575766
Add more check on SkipLayerNorm and BiasGelu fusion (#2574)
yufenglee Dec 6, 2019
79847f3
Fix file not found error during docker build. (#2569)
Exlsunshine Dec 7, 2019
c06dbd8
Add ConvTranspose1D (#2578)
askhade Dec 7, 2019
cbc398b
Ryanunderhill/packagename test (#2582)
RyanUnderhill Dec 7, 2019
0f12346
[Nuphar EP] fixes for some object detection models (#2581)
Dec 7, 2019
200f4b4
EmbedLayerNormalization Fusion Improvement (#2553)
liuziyue Dec 8, 2019
36eb177
Update version (#2584)
RyanUnderhill Dec 9, 2019
0ab5452
Temporarily exclude vgg19 test from Python backend test
HectorSVC Dec 9, 2019
62de8fa
Update docs for Android NNAPI EP (#2586)
daquexian Dec 9, 2019
6e08efa
Fix lto bug for protobuf and ubuntu
snnn Dec 5, 2019
41fc820
add path to build dir before test run (#2590)
shahasad Dec 10, 2019
7809970
Add missig env variables for mac pipeline test (#2595)
askhade Dec 10, 2019
b0128a4
Fixed an issue in updating realized dims (#2597)
yangchen-MS Dec 10, 2019
35ceb1a
Java API for onnxruntime (#2215)
Craigacp Dec 10, 2019
6858f0a
Add support for opset 11 in reshape fusion (#2592)
tianleiwu Dec 10, 2019
796948c
Rename automl python tools folder to featurizer_ops. (#2593)
yuslepukhin Dec 10, 2019
bc89ecc
Support opset 11 subgraph of Squad model in Embed Layer Normalization…
tianleiwu Dec 10, 2019
45babd6
symbolic shape inference: fix warnings in GPT-2 model (#2608)
Dec 11, 2019
2ca9733
Dump subgraph ID and fused graph ID (#2607)
yangchen-MS Dec 11, 2019
1ee250d
EmbedLayerNormalization Fusion For Dynamic Squad Model Opset 10 (#2613)
liuziyue Dec 11, 2019
8729784
Allow providers to be set for InferenceSession at construction (#2606)
EricCousineau-TRI Dec 11, 2019
b2d65b4
Remove unnecessary parameter in some places in GatherElements impleme…
hariharans29 Dec 11, 2019
6859d92
Make sure fenced tensor could not reuse other tensor. (#2561)
zhanghuanrong Dec 11, 2019
c04647b
Improve Embed Layer Norm Fusion for SQuAD with static input shape (#…
tianleiwu Dec 11, 2019
d6f33dc
fix float16 comparison in initializer (#2629)
yufenglee Dec 12, 2019
c7cd336
epsilon attribute for layernormalization fusion (#2639)
liuziyue Dec 12, 2019
4dbf944
removed unnecessary batch file and fix path (#2640)
shahasad Dec 12, 2019
ac08b58
Add shape inference to ConvTransposeWithDynamicPads schema (#2632)
jeffbloo Dec 12, 2019
8631b70
Improve cuda expand() opeator's performance. (#2624)
zhanghuanrong Dec 13, 2019
e31be23
Cuda pad optimize when no padding is needed. (#2625)
zhanghuanrong Dec 13, 2019
fff1ed9
Optimize cuda scatter() on 2D compatible. (#2628)
zhanghuanrong Dec 13, 2019
6e68007
fix build error for ARM (#2648)
HectorSVC Dec 13, 2019
1996129
Improve performance of resize() in Nearest mode (#2626)
zhanghuanrong Dec 13, 2019
18bdde3
Fix memory exception in Layer Norm Fusion (#2644)
tianleiwu Dec 13, 2019
a46a28b
Windows CI changes(#2650)
snnn Dec 13, 2019
7c87070
Import Featurizers (#2643)
yuslepukhin Dec 14, 2019
f741289
added cache version for nuphar JIT binaries (#2646)
yangchen-MS Dec 15, 2019
47503ec
Initiate the build scripts for ARM ACL (#2652)
HectorSVC Dec 16, 2019
c907881
MLAS: optimize QuantizeLinear (#2660)
tracysh Dec 16, 2019
447bb2c
Merge branch 'master' into user/orilevari/windowsai_master_merge
Dec 16, 2019
d3b0cd6
dereference sessionstate now that it is a unique pointer
Dec 17, 2019
5 changes: 5 additions & 0 deletions .gitignore
@@ -39,3 +39,8 @@ onnxprofile_profile_test_*.json
/csharp/packages
/csharp/src/Microsoft.ML.OnnxRuntime/Microsoft.ML.OnnxRuntime.targets
/csharp/src/Microsoft.ML.OnnxRuntime/Microsoft.ML.OnnxRuntime.props
cmake/external/FeaturizersLibrary/
# Java specific ignores
java/src/main/native/ai_onnxruntime_*.h
java/.gradle

10 changes: 6 additions & 4 deletions .gitmodules
@@ -40,10 +40,12 @@
[submodule "cmake/external/cub"]
path = cmake/external/cub
url = https://github.com/NVlabs/cub.git
[submodule "cmake/external/onnx-tensorrt"]
path = cmake/external/onnx-tensorrt
url = https://github.com/onnx/onnx-tensorrt.git
[submodule "cmake/external/wil"]
path = cmake/external/wil
url = https://github.com/microsoft/wil

[submodule "cmake/external/onnx-tensorrt"]
path = cmake/external/onnx-tensorrt
url = https://github.com/onnx/onnx-tensorrt.git
[submodule "cmake/external/json"]
path = cmake/external/json
url = https://github.com/nlohmann/json
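
Because this change adds the nlohmann/json submodule (and reshuffles the existing entries), a checkout of this branch needs its submodules refreshed. A minimal sketch of that step, using standard git commands rather than anything onnxruntime-specific:

```bash
# Re-sync submodule URLs and fetch the newly added cmake/external/json submodule.
git submodule sync --recursive
git submodule update --init --recursive
```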
42 changes: 29 additions & 13 deletions BUILD.md
@@ -100,7 +100,7 @@ The complete list of build options can be found by running `./build.sh (or .\bui
* [Intel DNNL/MKL-ML](#DNNL-and-MKLML)
* [Intel nGraph](#nGraph)
* [Intel OpenVINO](#openvino)
* [Android NNAPI](#Android)
* [Android NNAPI](#Android-NNAPI)
* [Nuphar Model Compiler](#Nuphar)
* [DirectML](#DirectML)

@@ -112,6 +112,7 @@ The complete list of build options can be found by running `./build.sh (or .\bui
**Architectures**
* [x86](#x86)
* [ARM](#ARM)
* [Android](#Android)

---

@@ -190,7 +191,7 @@ See more information on the TensorRT Execution Provider [here](./docs/execution_
* The path to the CUDA `bin` directory must be added to the PATH environment variable so that `nvcc` is found.
* The path to the cuDNN installation (path to folder that contains libcudnn.so) must be provided via the cuDNN_PATH environment variable, or `--cudnn_home parameter`.
* Install [TensorRT](https://developer.nvidia.com/nvidia-tensorrt-download)
* The TensorRT execution provider for ONNX Runtime is built and tested with TensorRT 6.0.1.5 but validated with the feature set equivalent to TensorRT 5. Some TensorRT 6 new features such as dynamic shape is not available at this time.
* The TensorRT execution provider for ONNX Runtime is built and tested with TensorRT 6.0.1.5.
* The path to TensorRT installation must be provided via the `--tensorrt_home parameter`.
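
With the prerequisites above in place, a TensorRT-enabled Linux build typically looks like the following. This is a sketch rather than the authoritative command: the install paths are placeholders, and `--use_tensorrt`/`--cuda_home` are assumed to follow the same build-script convention as the `--cudnn_home` and `--tensorrt_home` parameters mentioned above.

```bash
# Sketch: Linux build with the TensorRT execution provider enabled.
# Replace the home paths with your local CUDA, cuDNN and TensorRT installs.
./build.sh --config Release --use_tensorrt \
    --cuda_home /usr/local/cuda \
    --cudnn_home /usr/local/cudnn \
    --tensorrt_home /usr/local/TensorRT-6.0.1.5
```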

#### Build Instructions
@@ -277,18 +278,9 @@ For more information on OpenVINO Execution Provider's ONNX Layer support, To

---

### Android

#### Cross compiling on Linux

1. Get Android NDK from https://developer.android.com/ndk/downloads. Please unzip it after downloading.
### Android NNAPI

2. Get a pre-compiled protoc from [here](https://github.com/protocolbuffers/protobuf/releases/download/v3.6.1/protoc-3.6.1-linux-x86_64.zip). Please unzip it after downloading.

3. Denote the unzip destination in step 1 as $ANDROID_NDK, append `-DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake -DANDROID_ABI=arm64-v8a -DONNX_CUSTOM_PROTOC_EXECUTABLE=path/to/protoc` to your cmake args, run cmake and make to build it.

#### Notes
* For 32-bit devices, replace `-DANDROID_ABI=arm64-v8a` with `-DANDROID_ABI=armeabi-v7a`.
See information on the NNAPI Execution Provider [here](./docs/execution_providers/NNAPI-ExecutionProvider.md).

---

@@ -541,3 +533,27 @@ ls -l /code/onnxruntime/build/Linux/MinSizeRel/dist/*.whl

2. Use `.\build.bat` and specify `--arm` or `--arm64` as the build option to start building. Preferably use `Developer Command Prompt for VS`, or make sure all the installed cross-compilers are findable from the command prompt being used to build via the PATH environment variable.
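
With those switches, an ARM64 release build of the shared library might be started as follows. This is a sketch run from the Developer Command Prompt; only `--arm64` comes from the step above, while `--config` and `--build_shared_lib` are assumed standard options of the build script.

```bash
# Sketch: cross-compile an ARM64 release build of the shared library on Windows.
.\build.bat --arm64 --config Release --build_shared_lib
```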

---

### Android

#### Pre-Requisites

Install Android NDK from https://developer.android.com/ndk/downloads

#### Build Instructions

##### Cross compiling on Windows

```bash
./build.bat --android --android_ndk_path <android ndk path> --android_abi <android abi, e.g., arm64-v8a (default) or armeabi-v7a> --android_api <android api level, e.g., 27 (default)>
```

##### Cross compiling on Linux

```bash
./build.sh --android --android_ndk_path <android ndk path> --android_abi <android abi, e.g., arm64-v8a (default) or armeabi-v7a> --android_api <android api level, e.g., 27 (default)>
```
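
To make the placeholders concrete, a 64-bit Android build on Linux could be invoked as below; the NDK path is an assumed example location, and the ABI/API values are the documented defaults.

```bash
# Sketch: concrete Linux invocation for a 64-bit (arm64-v8a) Android build.
# $HOME/android-ndk-r20 is a placeholder for wherever the NDK was unpacked.
./build.sh --android --android_ndk_path $HOME/android-ndk-r20 \
    --android_abi arm64-v8a --android_api 27
```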

If you want to use NNAPI Execution Provider on Android, see [docs/execution_providers/NNAPI-ExecutionProvider.md](/docs/execution_providers/NNAPI-ExecutionProvider.md).

2 changes: 2 additions & 0 deletions README.md
@@ -88,6 +88,7 @@ Additional dockerfiles can be found [here](./dockerfiles).
* [C](docs/C_API.md)
* [C#](docs/CSharp_API.md)
* [C++](./include/onnxruntime/core/session/onnxruntime_cxx_api.h)
* [Java](docs/Java_API.md)
* [Ruby](https://github.com/ankane/onnxruntime) (external project)

### Official Builds
@@ -107,6 +108,7 @@ system.
* Version: **CUDA 10.0** and **cuDNN 7.6**
* Older ONNX Runtime releases: used **CUDA 9.1** and **cuDNN 7.1** - please refer to [prior release notes](https://github.com/microsoft/onnxruntime/releases) for more details.
* Python binaries are compatible with **Python 3.5-3.7**. See [Python Dev Notes](./docs/Python_Dev_Notes.md). If using `pip` to download the Python binaries, run `pip install --upgrade pip` prior to downloading.
* The Java API is compatible with **Java 8-13**.
* Certain operators make use of system locales. Installation of the **English language package** and configuring `en_US.UTF-8 locale` is required.
* For Ubuntu install [language-pack-en package](https://packages.ubuntu.com/search?keywords=language-pack-en)
* Run the following commands:
28 changes: 27 additions & 1 deletion ThirdPartyNotices.txt
@@ -3794,4 +3794,30 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
SOFTWARE

-----

nlohmann/json

MIT License

Copyright (c) 2013-2019 Niels Lohmann

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
2 changes: 1 addition & 1 deletion VERSION_NUMBER
@@ -1 +1 @@
1.0.0
1.1.0
18 changes: 18 additions & 0 deletions cgmanifest.json
@@ -437,6 +437,24 @@
},
"type": "git"
}
},
{
"component": {
"git": {
"commitHash": "d98bf0278d6f59a58271425963a8422ff48fe249",
"repositoryUrl": "https://github.com/nlohmann/json.git"
},
"type": "git"
}
},
{
"component": {
"git": {
"commitHash": "006df6bff45dac59d378609fe85f6736a901ee93",
"repositoryUrl": "https://github.com/microsoft/FeaturizersLibrary.git"
},
"type": "git"
}
}
],
"Version": 1
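
Since the manifest is plain JSON, a quick parse check catches problems such as a missing comma between the new entries. A minimal sketch using the Python standard library:

```bash
# Sketch: verify cgmanifest.json still parses after adding the new components.
python -m json.tool cgmanifest.json > /dev/null && echo "cgmanifest.json is valid JSON"
```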
25 changes: 22 additions & 3 deletions cmake/CMakeLists.txt
@@ -84,8 +84,8 @@ option(onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS "Dump node input shapes and output
option(onnxruntime_USE_DML "Build with DirectML support" OFF)
option(onnxruntime_USE_WINML "Build with WinML support" OFF)
option(onnxruntime_USE_ACL "Build with ACL support" OFF)
option(onnxruntime_USE_TELEMETRY "Build with Telemetry" OFF)
option(onnxruntime_ENABLE_INSTRUMENT "Enable Instrument with Event Tracing for Windows (ETW)" OFF)
option(onnxruntime_USE_TELEMETRY "Build with Telemetry" OFF)

set(protobuf_BUILD_TESTS OFF CACHE BOOL "Build protobuf tests" FORCE)
#nsync tests failed on Mac Build
@@ -101,6 +101,13 @@ if(NOT WIN32)
message(WARNING "Instrument is only supported on Windows now")
set(onnxruntime_ENABLE_INSTRUMENT OFF)
endif()
else()
check_cxx_compiler_flag(/d2FH4- HAS_D2FH4)
if (HAS_D2FH4)
message("Enabling /d2FH4-")
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /d2FH4-")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /d2FH4-")
endif()
endif()

if(onnxruntime_USE_OPENMP)
@@ -282,6 +289,12 @@ else()
add_library(protobuf::libprotobuf ALIAS libprotobuf-lite)
endif()
add_executable(protobuf::protoc ALIAS protoc)

if(UNIX AND onnxruntime_ENABLE_LTO)
#https://github.com/protocolbuffers/protobuf/issues/5923
target_link_options(protoc PRIVATE "-Wl,--no-as-needed")
endif()

include(protobuf_function.cmake)

if (onnxruntime_DISABLE_CONTRIB_OPS)
@@ -703,8 +716,8 @@ include(onnxruntime_mlas.cmake)

if(onnxruntime_USE_AUTOML)
add_definitions(-DMICROSOFT_AUTOML)
# Build shared featurizer library
include(onnxruntime_automl_featurizers.cmake)
# Fetch and build featurizers
include(external/featurizers.cmake)
endif()

if(WIN32)
@@ -743,6 +756,11 @@ if (onnxruntime_BUILD_SERVER)
include(onnxruntime_server.cmake)
endif()

if (onnxruntime_BUILD_JAVA)
message(STATUS "Java Build is enabled")
include(onnxruntime_java.cmake)
endif()
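
The new `onnxruntime_BUILD_JAVA` option is a plain CMake switch, so it can be toggled at configure time. A minimal sketch, assuming an out-of-source build directory next to the repository's `cmake/` folder (in practice the top-level build script is expected to forward this option):

```bash
# Sketch: enable the Java bindings when configuring CMake directly.
cmake ../cmake -Donnxruntime_BUILD_JAVA=ON -DCMAKE_BUILD_TYPE=Release
```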

# some of the tests rely on the shared libs to be
# built; hence the ordering
if (onnxruntime_BUILD_UNIT_TESTS)
@@ -773,3 +791,4 @@ if (onnxruntime_BUILD_CSHARP)
# set_property(GLOBAL PROPERTY VS_DOTNET_TARGET_FRAMEWORK_VERSION "netstandard2.0")
include(onnxruntime_csharp.cmake)
endif()

2 changes: 1 addition & 1 deletion cmake/external/dnnl.cmake
@@ -77,7 +77,7 @@ if (onnxruntime_USE_DNNL)
GIT_TAG ${DNNL_TAG}
# PATCH_COMMAND ${MKLDNN_PATCH_DISCARD_COMMAND} COMMAND ${DNNL_PATCH_COMMAND}
SOURCE_DIR ${DNNL_SOURCE}
CMAKE_ARGS -DDNNL_PRODUCT_BUILD_MODE=OFF -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} -DCMAKE_INSTALL_PREFIX=${DNNL_INSTALL} -DMKLROOT=${MKML_DIR}
CMAKE_ARGS -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE} -DCMAKE_INSTALL_PREFIX=${DNNL_INSTALL}
)
link_directories(${DNNL_LIB_DIR})
#if (onnxruntime_USE_MKLML)
85 changes: 85 additions & 0 deletions cmake/external/featurizers.cmake
@@ -0,0 +1,85 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
# This source code should not depend on the onnxruntime and may be built independently

set(featurizers_URL "https://github.com/microsoft/FeaturizersLibrary.git")
set(featurizers_TAG "006df6bff45dac59d378609fe85f6736a901ee93")

set(featurizers_pref FeaturizersLibrary)
set(featurizers_ROOT ${PROJECT_SOURCE_DIR}/external/${featurizers_pref})
set(featurizers_BINARY_DIR ${CMAKE_CURRENT_BINARY_DIR}/external/${featurizers_pref})

# Only due to GIT_CONFIG
# Uncomment UPDATE_COMMAND if you work locally
# on the featurizers so cmake does not undo your changes.
if (WIN32)
ExternalProject_Add(featurizers_lib
PREFIX ${featurizers_pref}
GIT_REPOSITORY ${featurizers_URL}
GIT_TAG ${featurizers_TAG}
# Need this to properly checkout crlf
GIT_CONFIG core.autocrlf=input
SOURCE_DIR ${featurizers_ROOT}
# Location of CMakeLists.txt
SOURCE_SUBDIR src/Featurizers
BINARY_DIR ${featurizers_BINARY_DIR}
# UPDATE_COMMAND ""
INSTALL_COMMAND ""
)
else()
ExternalProject_Add(featurizers_lib
PREFIX ${featurizers_pref}
GIT_REPOSITORY ${featurizers_URL}
GIT_TAG ${featurizers_TAG}
SOURCE_DIR ${featurizers_ROOT}
# Location of CMakeLists.txt
SOURCE_SUBDIR src/Featurizers
BINARY_DIR ${featurizers_BINARY_DIR}
CMAKE_ARGS -DCMAKE_POSITION_INDEPENDENT_CODE=ON
# UPDATE_COMMAND ""
INSTALL_COMMAND ""
)
endif()

add_library(automl_featurizers STATIC IMPORTED)
add_dependencies(automl_featurizers featurizers_lib)
target_include_directories(automl_featurizers INTERFACE ${featurizers_ROOT}/src)

if(MSVC)
set_property(TARGET automl_featurizers PROPERTY IMPORTED_LOCATION
${CMAKE_CURRENT_BINARY_DIR}/external/${featurizers_pref}/${CMAKE_BUILD_TYPE}/FeaturizersCode.lib)
else()
set_property(TARGET automl_featurizers PROPERTY IMPORTED_LOCATION
${CMAKE_CURRENT_BINARY_DIR}/external/${featurizers_pref}/libFeaturizersCode.a)
endif()

if (WIN32)
# Add Code Analysis properties to enable C++ Core checks. Have to do it via a props file include.
set_target_properties(automl_featurizers PROPERTIES VS_USER_PROPS ${PROJECT_SOURCE_DIR}/ConfigureVisualStudioCodeAnalysis.props)
endif()

# Build this in CentOS
# foreach(_test_name IN ITEMS
# CatImputerFeaturizer_UnitTests
# DateTimeFeaturizer_UnitTests
# HashOneHotVectorizerFeaturizer_UnitTests
# ImputationMarkerFeaturizer_UnitTests
# LabelEncoderFeaturizer_UnitTests
# MaxAbsScalarFeaturizer_UnitTests
# MinMaxScalarFeaturizer_UnitTests
# MissingDummiesFeaturizer_UnitTests
# OneHotEncoderFeaturizer_UnitTests
# RobustScalarFeaturizer_UnitTests
# SampleAddFeaturizer_UnitTest
# StringFeaturizer_UnitTest
# Structs_UnitTest
# TimeSeriesImputerFeaturizer_UnitTest
# )
# add_executable(${_test_name} ${featurizers_ROOT}/src/Featurizers/UnitTests/${_test_name}.cpp)
# add_dependencies(${_test_name} automl_featurizers)
# target_include_directories(${_test_name} PRIVATE ${featurizers_ROOT}/src)
# target_link_libraries(${_test_name} automl_featurizers)
# list(APPEND featurizers_TEST_SRC ${featurizers_ROOT}/src/Featurizers/UnitTests/${_test_name}.cpp)
# endforeach()

# source_group(TREE ${featurizers_ROOT}/src/Featurizers/UnitTests FILES ${featurizers_TEST_SRC})
1 change: 1 addition & 0 deletions cmake/external/json
Submodule json added at d98bf0
46 changes: 0 additions & 46 deletions cmake/onnxruntime_automl_featurizers.cmake

This file was deleted.
