Error building TensorFlow 2.8 in Windows 10

Hello,

I am getting warnings and errors when trying to build TensorFlow 2.8 for CPU and CUDA.
Could you point me to a solution?

Thank you,
Paul Bagwell

My installed versions are:
TensorFlow source, git branch r2.8
Windows 10, version 20H2 Build 19042.1526
Cuda Toolkit 11.2
cuDNN 8.1
Bazel 4.2.1
Python 3.9.10
MSYS2 64bit
Visual Studio Community 2019
NVIDIA RTX A2000, compute capability 8.6


Here is the error when building for CPU:

  1. ERROR: An error occurred during the fetch of repository 'llvm-project':

Here are the warnings and errors when building for CUDA:

  1. WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/tensorflow/runtime/archive/9851cf287f7ac21db2f4baeae7fb3165dec2e1b7.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found

  2. ERROR: An error occurred during the fetch of repository 'local_config_cuda':

  3. ERROR: C:/users/pbagwell/repos/github/tensorflow/WORKSPACE:15:14: fetching cuda_configure rule //external:local_config_cuda: Traceback (most recent call last):

  4. Error in fail: Repository command failed
    Could not find any cublas_api.h matching version '' in any subdirectory:


Here is the log of the command prompt when building for CPU:

C:\Users\pbagwell\repos\GitHub\tensorflow>bazel build //tensorflow/tools/pip_package:build_pip_package
Starting local Bazel server and connecting to it...
INFO: Options provided by the client:
  Inherited 'common' options: --isatty=1 --terminal_columns=143
INFO: Reading rc options for 'build' from c:\users\pbagwell\repos\github\tensorflow\.bazelrc:
  Inherited 'common' options: --experimental_repo_remote_exec
INFO: Options provided by the client:
  'build' options: --python_path=C:/Users/pbagwell/AppData/Local/Programs/Python/Python39/python.exe
INFO: Reading rc options for 'build' from c:\users\pbagwell\repos\github\tensorflow\.bazelrc:
  'build' options: --define framework_shared_object=true --java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --host_java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library
INFO: Reading rc options for 'build' from c:\users\pbagwell\repos\github\tensorflow\.tf_configure.bazelrc:
  'build' options: --action_env PYTHON_BIN_PATH=C:/Users/pbagwell/AppData/Local/Programs/Python/Python39/python.exe --action_env PYTHON_LIB_PATH=C:/Users/pbagwell/AppData/Local/Programs/Python/Python39/lib/site-packages --python_path=C:/Users/pbagwell/AppData/Local/Programs/Python/Python39/python.exe --copt=/d2ReducedOptimizeHugeFunctions --host_copt=/d2ReducedOptimizeHugeFunctions --define=override_eigen_strong_inline=true
INFO: Reading rc options for 'build' from c:\users\pbagwell\repos\github\tensorflow\.bazelrc:
  'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/common,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/fallback,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils
INFO: Found applicable config definition build:short_logs in file c:\users\pbagwell\repos\github\tensorflow\.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file c:\users\pbagwell\repos\github\tensorflow\.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:windows in file c:\users\pbagwell\repos\github\tensorflow\.bazelrc: --copt=/W0 --copt=/D_USE_MATH_DEFINES --host_copt=/D_USE_MATH_DEFINES --cxxopt=/std:c++14 --host_cxxopt=/std:c++14 --config=monolithic --copt=-DWIN32_LEAN_AND_MEAN --host_copt=-DWIN32_LEAN_AND_MEAN --copt=-DNOGDI --host_copt=-DNOGDI --copt=/experimental:preprocessor --host_copt=/experimental:preprocessor --linkopt=/DEBUG --host_linkopt=/DEBUG --linkopt=/OPT:REF --host_linkopt=/OPT:REF --linkopt=/OPT:ICF --host_linkopt=/OPT:ICF --verbose_failures --features=compiler_param_file --distinct_host_configuration=false
INFO: Found applicable config definition build:monolithic in file c:\users\pbagwell\repos\github\tensorflow\.bazelrc: --define framework_shared_object=false
INFO: Repository llvm-project instantiated at:
  C:/users/pbagwell/repos/github/tensorflow/WORKSPACE:15:14: in <toplevel>
  C:/users/pbagwell/repos/github/tensorflow/tensorflow/workspace2.bzl:888:21: in workspace
  C:/users/pbagwell/repos/github/tensorflow/tensorflow/workspace2.bzl:526:15: in _tf_repositories
  C:/users/pbagwell/repos/github/tensorflow/third_party/llvm/setup.bzl:22:19: in llvm_setup
Repository rule llvm_configure defined at:
  C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw/utils/bazel/configure.bzl:83:33: in <toplevel>
ERROR: An error occurred during the fetch of repository 'llvm-project':
   Traceback (most recent call last):
        File "C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw/utils/bazel/configure.bzl", line 73, column 25, in _llvm_configure_impl
                _overlay_directories(repository_ctx)
        File "C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw/utils/bazel/configure.bzl", line 62, column 13, in _overlay_directories
                fail(("Failed to execute overlay script: '{cmd}'\n" +
Error in fail: Failed to execute overlay script: 'C:/Users/pbagwell/AppData/Local/Programs/Python/Python39/python.exe C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw/utils/bazel/overlay_directories.py --src C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw --overlay C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw/utils/bazel/llvm-project-overlay --target .'
Exited with code 1
stdout:

stderr:
Traceback (most recent call last):
  File "C:\users\pbagwell\_bazel_pbagwell\bmn5pjkz\external\llvm-raw\utils\bazel\overlay_directories.py", line 92, in <module>
    main(parse_arguments())
  File "C:\users\pbagwell\_bazel_pbagwell\bmn5pjkz\external\llvm-raw\utils\bazel\overlay_directories.py", line 80, in main
    _symlink_abs(os.path.join(args.overlay, relpath),
  File "C:\users\pbagwell\_bazel_pbagwell\bmn5pjkz\external\llvm-raw\utils\bazel\overlay_directories.py", line 64, in _symlink_abs
    os.symlink(os.path.abspath(from_path), os.path.abspath(to_path))
OSError: [WinError 1314] A required privilege is not held by the client: 'C:\\users\\pbagwell\\_bazel_pbagwell\\bmn5pjkz\\external\\llvm-raw\\utils\\bazel\\llvm-project-overlay\\.bazelignore' -> 'C:\\users\\pbagwell\\_bazel_pbagwell\\bmn5pjkz\\external\\llvm-project\\.bazelignore'

ERROR: Error fetching repository: Traceback (most recent call last):
        File "C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw/utils/bazel/configure.bzl", line 73, column 25, in _llvm_configure_impl
                _overlay_directories(repository_ctx)
        File "C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw/utils/bazel/configure.bzl", line 62, column 13, in _overlay_directories
                fail(("Failed to execute overlay script: '{cmd}'\n" +
Error in fail: Failed to execute overlay script: 'C:/Users/pbagwell/AppData/Local/Programs/Python/Python39/python.exe C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw/utils/bazel/overlay_directories.py --src C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw --overlay C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw/utils/bazel/llvm-project-overlay --target .'
Exited with code 1
stdout:

stderr:
Traceback (most recent call last):
  File "C:\users\pbagwell\_bazel_pbagwell\bmn5pjkz\external\llvm-raw\utils\bazel\overlay_directories.py", line 92, in <module>
    main(parse_arguments())
  File "C:\users\pbagwell\_bazel_pbagwell\bmn5pjkz\external\llvm-raw\utils\bazel\overlay_directories.py", line 80, in main
    _symlink_abs(os.path.join(args.overlay, relpath),
  File "C:\users\pbagwell\_bazel_pbagwell\bmn5pjkz\external\llvm-raw\utils\bazel\overlay_directories.py", line 64, in _symlink_abs
    os.symlink(os.path.abspath(from_path), os.path.abspath(to_path))
OSError: [WinError 1314] A required privilege is not held by the client: 'C:\\users\\pbagwell\\_bazel_pbagwell\\bmn5pjkz\\external\\llvm-raw\\utils\\bazel\\llvm-project-overlay\\.bazelignore' -> 'C:\\users\\pbagwell\\_bazel_pbagwell\\bmn5pjkz\\external\\llvm-project\\.bazelignore'

INFO: Repository flatbuffers instantiated at:
  C:/users/pbagwell/repos/github/tensorflow/WORKSPACE:15:14: in <toplevel>
  C:/users/pbagwell/repos/github/tensorflow/tensorflow/workspace2.bzl:881:28: in workspace
  C:/users/pbagwell/repos/github/tensorflow/tensorflow/workspace2.bzl:66:16: in _initialize_third_party
  C:/users/pbagwell/repos/github/tensorflow/third_party/flatbuffers/workspace.bzl:6:20: in repo
  C:/users/pbagwell/repos/github/tensorflow/third_party/repo.bzl:128:21: in tf_http_archive
Repository rule _tf_http_archive defined at:
  C:/users/pbagwell/repos/github/tensorflow/third_party/repo.bzl:81:35: in <toplevel>
ERROR: C:/users/pbagwell/repos/github/tensorflow/tensorflow/tools/pip_package/BUILD:273:10: //tensorflow/tools/pip_package:build_pip_package depends on //tensorflow/compiler/mlir/tensorflow:gen_mlir_passthrough_op_py in repository @ which failed to fetch. no such package '@llvm-project//mlir': Failed to execute overlay script: 'C:/Users/pbagwell/AppData/Local/Programs/Python/Python39/python.exe C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw/utils/bazel/overlay_directories.py --src C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw --overlay C:/users/pbagwell/_bazel_pbagwell/bmn5pjkz/external/llvm-raw/utils/bazel/llvm-project-overlay --target .'
Exited with code 1
stdout:

stderr:
Traceback (most recent call last):
  File "C:\users\pbagwell\_bazel_pbagwell\bmn5pjkz\external\llvm-raw\utils\bazel\overlay_directories.py", line 92, in <module>
    main(parse_arguments())
  File "C:\users\pbagwell\_bazel_pbagwell\bmn5pjkz\external\llvm-raw\utils\bazel\overlay_directories.py", line 80, in main
    _symlink_abs(os.path.join(args.overlay, relpath),
  File "C:\users\pbagwell\_bazel_pbagwell\bmn5pjkz\external\llvm-raw\utils\bazel\overlay_directories.py", line 64, in _symlink_abs
    os.symlink(os.path.abspath(from_path), os.path.abspath(to_path))
OSError: [WinError 1314] A required privilege is not held by the client: 'C:\\users\\pbagwell\\_bazel_pbagwell\\bmn5pjkz\\external\\llvm-raw\\utils\\bazel\\llvm-project-overlay\\.bazelignore' -> 'C:\\users\\pbagwell\\_bazel_pbagwell\\bmn5pjkz\\external\\llvm-project\\.bazelignore'

ERROR: Analysis of target '//tensorflow/tools/pip_package:build_pip_package' failed; build aborted: Analysis failed
INFO: Elapsed time: 3.724s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (50 packages loaded, 14 targets configured)
    currently loading: tensorflow/lite/python ... (2 packages)
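The root cause in this log is the OSError: [WinError 1314] raised by os.symlink: the llvm-project overlay script needs to create symbolic links, and on Windows 10 an unelevated process only has that privilege when Developer Mode is enabled (or when the prompt itself is elevated). A small probe (an illustrative sketch, not part of the TensorFlow build) to check whether the current process can create symlinks:

```python
import os
import tempfile

def symlinks_available() -> bool:
    """Return True if this process is allowed to create symbolic links.

    On Windows, os.symlink raises OSError [WinError 1314] when the
    process lacks the symlink privilege (Developer Mode off and not
    running elevated).
    """
    with tempfile.TemporaryDirectory() as d:
        target = os.path.join(d, "target.txt")
        link = os.path.join(d, "link.txt")
        open(target, "w").close()
        try:
            os.symlink(target, link)
            return True
        except OSError:
            return False

print("symlink creation OK" if symlinks_available()
      else "symlink creation blocked: enable Developer Mode or run elevated")
```

If the probe prints the failure message, enabling Developer Mode (or running the command prompt as Administrator) should let the llvm-project overlay step succeed.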

Here is the log of the command prompt when building for CUDA:

 C:\Users\pbagwell\repos\GitHub\tensorflow>bazel build //tensorflow/tools/pip_package:build_pip_package
WARNING: Option 'java_toolchain' is deprecated
WARNING: Option 'host_java_toolchain' is deprecated
INFO: Options provided by the client:
  Inherited 'common' options: --isatty=1 --terminal_columns=143
INFO: Reading rc options for 'build' from c:\users\pbagwell\repos\github\tensorflow\.bazelrc:
  Inherited 'common' options: --experimental_repo_remote_exec
INFO: Options provided by the client:
  'build' options: --python_path=C:/Users/pbagwell/AppData/Local/Programs/Python/Python39/python.exe
INFO: Reading rc options for 'build' from c:\users\pbagwell\repos\github\tensorflow\.bazelrc:
  'build' options: --define framework_shared_object=true --java_toolchain=//tensorflow/tools/toolchains/java:tf_java_toolchain --host_java_toolchain=//tensorflow/tools/toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_cc_shared_library
INFO: Reading rc options for 'build' from c:\users\pbagwell\repos\github\tensorflow\.tf_configure.bazelrc:
  'build' options: --action_env PYTHON_BIN_PATH=C:/Users/pbagwell/AppData/Local/Programs/Python/Python39/python.exe --action_env PYTHON_LIB_PATH=C:/Users/pbagwell/AppData/Local/Programs/Python/Python39/lib/site-packages --python_path=C:/Users/pbagwell/AppData/Local/Programs/Python/Python39/python.exe --action_env TF_CUDA_VERSION=11.2 --action_env TF_CUDNN_VERSION=8.1 --action_env TF_CUDA_PATHS=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2,C:\Program Files\NVIDIA\CUDNN\v8.1 --action_env CUDA_TOOLKIT_PATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.2 --action_env TF_CUDA_COMPUTE_CAPABILITIES=8.6 --config=cuda --copt=/d2ReducedOptimizeHugeFunctions --host_copt=/d2ReducedOptimizeHugeFunctions --define=override_eigen_strong_inline=true
INFO: Reading rc options for 'build' from c:\users\pbagwell\repos\github\tensorflow\.bazelrc:
  'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/common,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/graph_executor,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils
INFO: Found applicable config definition build:short_logs in file c:\users\pbagwell\repos\github\tensorflow\.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file c:\users\pbagwell\repos\github\tensorflow\.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:cuda in file c:\users\pbagwell\repos\github\tensorflow\.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
INFO: Found applicable config definition build:windows in file c:\users\pbagwell\repos\github\tensorflow\.bazelrc: --copt=/W0 --copt=/D_USE_MATH_DEFINES --host_copt=/D_USE_MATH_DEFINES --cxxopt=/std:c++14 --host_cxxopt=/std:c++14 --config=monolithic --copt=-DWIN32_LEAN_AND_MEAN --host_copt=-DWIN32_LEAN_AND_MEAN --copt=-DNOGDI --host_copt=-DNOGDI --copt=/experimental:preprocessor --host_copt=/experimental:preprocessor --linkopt=/DEBUG --host_linkopt=/DEBUG --linkopt=/OPT:REF --host_linkopt=/OPT:REF --linkopt=/OPT:ICF --host_linkopt=/OPT:ICF --verbose_failures --features=compiler_param_file --distinct_host_configuration=false
INFO: Found applicable config definition build:monolithic in file c:\users\pbagwell\repos\github\tensorflow\.bazelrc: --define framework_shared_object=false
INFO: Repository local_config_cuda instantiated at:
  C:/users/pbagwell/repos/github/tensorflow/WORKSPACE:15:14: in <toplevel>
  C:/users/pbagwell/repos/github/tensorflow/tensorflow/workspace2.bzl:868:19: in workspace
  C:/users/pbagwell/repos/github/tensorflow/tensorflow/workspace2.bzl:96:19: in _tf_toolchains
Repository rule cuda_configure defined at:
  C:/users/pbagwell/repos/github/tensorflow/third_party/gpus/cuda_configure.bzl:1448:33: in <toplevel>
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/tensorflow/runtime/archive/9851cf287f7ac21db2f4baeae7fb3165dec2e1b7.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
ERROR: An error occurred during the fetch of repository 'local_config_cuda':
   Traceback (most recent call last):
        File "C:/users/pbagwell/repos/github/tensorflow/third_party/gpus/cuda_configure.bzl", line 1401, column 38, in _cuda_autoconf_impl
                _create_local_cuda_repository(repository_ctx)
        File "C:/users/pbagwell/repos/github/tensorflow/third_party/gpus/cuda_configure.bzl", line 978, column 35, in _create_local_cuda_repository
                cuda_config = _get_cuda_config(repository_ctx, find_cuda_config_script)
        File "C:/users/pbagwell/repos/github/tensorflow/third_party/gpus/cuda_configure.bzl", line 666, column 30, in _get_cuda_config
                config = find_cuda_config(repository_ctx, find_cuda_config_script, ["cuda", "cudnn"])
        File "C:/users/pbagwell/repos/github/tensorflow/third_party/gpus/cuda_configure.bzl", line 643, column 41, in find_cuda_config
                exec_result = _exec_find_cuda_config(repository_ctx, script_path, cuda_libraries)
        File "C:/users/pbagwell/repos/github/tensorflow/third_party/gpus/cuda_configure.bzl", line 637, column 19, in _exec_find_cuda_config
                return execute(repository_ctx, [python_bin, "-c", decompress_and_execute_cmd])
        File "C:/users/pbagwell/repos/github/tensorflow/third_party/remote_config/common.bzl", line 230, column 13, in execute
                fail(
Error in fail: Repository command failed
Could not find any cublas_api.h matching version '' in any subdirectory:
        ''
        'include'
        'include/cuda'
        'include/*-linux-gnu'
        'extras/CUPTI/include'
        'include/cuda/CUPTI'
of:
ERROR: C:/users/pbagwell/repos/github/tensorflow/WORKSPACE:15:14: fetching cuda_configure rule //external:local_config_cuda: Traceback (most recent call last):
        File "C:/users/pbagwell/repos/github/tensorflow/third_party/gpus/cuda_configure.bzl", line 1401, column 38, in _cuda_autoconf_impl
                _create_local_cuda_repository(repository_ctx)
        File "C:/users/pbagwell/repos/github/tensorflow/third_party/gpus/cuda_configure.bzl", line 978, column 35, in _create_local_cuda_repository
                cuda_config = _get_cuda_config(repository_ctx, find_cuda_config_script)
        File "C:/users/pbagwell/repos/github/tensorflow/third_party/gpus/cuda_configure.bzl", line 666, column 30, in _get_cuda_config
                config = find_cuda_config(repository_ctx, find_cuda_config_script, ["cuda", "cudnn"])
        File "C:/users/pbagwell/repos/github/tensorflow/third_party/gpus/cuda_configure.bzl", line 643, column 41, in find_cuda_config
                exec_result = _exec_find_cuda_config(repository_ctx, script_path, cuda_libraries)
        File "C:/users/pbagwell/repos/github/tensorflow/third_party/gpus/cuda_configure.bzl", line 637, column 19, in _exec_find_cuda_config
                return execute(repository_ctx, [python_bin, "-c", decompress_and_execute_cmd])
        File "C:/users/pbagwell/repos/github/tensorflow/third_party/remote_config/common.bzl", line 230, column 13, in execute
                fail(
Error in fail: Repository command failed
Could not find any cublas_api.h matching version '' in any subdirectory:
        ''
        'include'
        'include/cuda'
        'include/*-linux-gnu'
        'extras/CUPTI/include'
        'include/cuda/CUPTI'
of:
INFO: Found applicable config definition build:cuda in file c:\users\pbagwell\repos\github\tensorflow\.bazelrc: --repo_env TF_NEED_CUDA=1 --crosstool_top=@local_config_cuda//crosstool:toolchain --@local_config_cuda//:enable_cuda
WARNING: Option 'java_toolchain' is deprecated
WARNING: Option 'host_java_toolchain' is deprecated
ERROR: @local_config_cuda//:enable_cuda :: Error loading option @local_config_cuda//:enable_cuda: Repository command failed
Could not find any cublas_api.h matching version '' in any subdirectory:
        ''
        'include'
        'include/cuda'
        'include/*-linux-gnu'
        'extras/CUPTI/include'
        'include/cuda/CUPTI'
of:

The TensorFlow build also needed this environment variable (set with forward slashes):

CUDNN_INSTALL_PATH=C:/Program Files/NVIDIA/CUDNN/v8.1

The TensorFlow build also needed Developer Mode to be turned on. After that, the build was successful.

Build the TensorFlow C++ DLL with this command:

bazel build tensorflow:tensorflow.dll

The command prompt must be “Run as Admin” for the bazel build of TensorFlow to be successful.

Helped a lot. Thanks for the follow up.

I get an error when I try using this.

ERROR: An error occurred during the fetch of repository 'llvm-project':

Followed by quite a bit of other stuff, but I can’t post the whole thing because I’m a new user. Do you have any idea what might cause that error?

Build TensorFlow 2.8 C/C++ DLLs from source on Windows 10 & 11.

Here are my latest notes on building the TensorFlow C/C++ DLLs on Windows.

  1. Set Windows 10 to Developer Mode !!!
    Why: Symlinks in Windows 10! - Windows Developer Blog
    How: Enable your device for development - Windows apps | Microsoft Docs
  2. Set Windows Registry to long path names (Python scripts reference long path names):
    See: Maximum Path Length Limitation - Win32 apps | Microsoft Docs

RegEdit
Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
LongPathsEnabled = 1
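A quick way to verify the long-path setting took effect (an illustrative probe, not part of the build) is to try creating a directory tree deeper than the legacy 260-character MAX_PATH limit:

```python
import os
import tempfile

def long_paths_available(depth: int = 70) -> bool:
    """Try to create a nested path well past 260 characters.

    With LongPathsEnabled=0 on Windows this fails with an OSError;
    with the registry value set (and after a reboot) it succeeds.
    """
    with tempfile.TemporaryDirectory() as d:
        deep = os.path.join(d, *(["sub"] * depth))  # roughly 280+ characters
        try:
            os.makedirs(deep)
            return True
        except OSError:
            return False

print("long paths OK" if long_paths_available()
      else "long paths still disabled: check LongPathsEnabled and reboot")
```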

  3. Follow the steps to install and build TensorFlow from this link:
    Build from source on Windows  |  TensorFlow
    a. Install Visual Studio Community 2019
    i. Check Desktop development with C++
    ii. Check C++ Clang tools for Windows
    iii. Check Individual components: Git for Windows
    iv. Check Individual components: GitHub extension for Visual Studio
    b. Install the Go programming language (needed for the go install command below)
    c. Install Bazelisk
    GitHub - bazelbuild/bazelisk: A user-friendly launcher for Bazel.
    i. Set environment variable USE_BAZEL_VERSION=4.2.1
    ii. go install github.com/bazelbuild/bazelisk@latest
    iii. rename bazelisk.exe to bazel.exe
    iv. Make sure path to bazel.exe is added to PATH env var.
    v. Check the version is 4.2.1: bazel --version
    d. Install Python 3.9.10 for Windows
    Python Releases for Windows | Python.org
    i. Check pip
    ii. Check py launcher
    iii. After installing, check the version and where it is located:
    python --version
    py --version
    where python
    where py
    iv. Upgrade pip
    py -m pip install --upgrade pip
    e. Install the TensorFlow pip package dependencies:
    i. pip3 install -U six numpy wheel packaging
    ii. pip3 install -U keras_preprocessing --no-deps
    f. Install MSYS2. Add C:\msys64\usr\bin to PATH.
    g. Reopen command prompt and enter:
    pacman -S git patch unzip
    h. Install CUDA SDK 11.2 (if Windows 11, see below item iii.)
    i. Check tested build configurations
    Build from source  |  TensorFlow
    ii. Check version of CUDA:
    nvcc --version

iii. Note: If on Windows 11, install CUDA SDK 11.6.1 and cuDNN 8.3.3. Also download zlibwapi.dll and copy it to C:\Windows\System32. See "zlib required for cuDNN 8.3.0 on Windows" (Issue #6226, cupy/cupy on GitHub) and the NVIDIA cuDNN Installation Guide.

  4. After you clone TensorFlow, check out the release branch: git checkout r2.8

  5. Make sure to use the TensorFlow, Python, compiler, Bazel, cuDNN, and CUDA versions listed in the table "Tested build configurations." (Windows 11 with CUDA 11.6 is not listed as "tested" but seems to work.)
    Build from source on Windows  |  TensorFlow

  6. Configure the build for CPU:
    Build from source on Windows  |  TensorFlow
    If you configure for GPU, you will need to set this environment variable with forward slashes:
    CUDNN_INSTALL_PATH=C:/Program Files/NVIDIA/CUDNN/v8.1

*Note: If on Windows 11, use:
CUDNN_INSTALL_PATH=C:/Program Files/NVIDIA/CUDNN/v8.3
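Since the configure script expects forward slashes here, a tiny helper (purely illustrative) shows the conversion to apply to a path copied from Windows Explorer:

```python
def to_forward_slashes(path: str) -> str:
    """Normalize a copied Windows path to the forward-slash form that
    TensorFlow's configure script expects in CUDNN_INSTALL_PATH etc."""
    return path.replace("\\", "/")

print(to_forward_slashes(r"C:\Program Files\NVIDIA\CUDNN\v8.1"))
# -> C:/Program Files/NVIDIA/CUDNN/v8.1
```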

  7. Open the BUILD file to see the package names you can build:
    repos\GitHub\tensorflow\tensorflow\BUILD. You should see:
    tf_cc_shared_object(
    name = "tensorflow_cc",

    genrule(
    name = "install_headers",
The filegroup below yields the interface library (tensorflow_cc.dll.if.lib) for linking against the TensorFlow DLL (tensorflow_cc.dll) on Windows.

To learn more about import libraries (called interface libraries in Bazel), see:

Link an executable to a DLL | Microsoft Docs

filegroup(
name = "get_tensorflow_cc_dll_import_lib",
srcs = ["//tensorflow:tensorflow_cc.dll"],
output_group = "interface_library",
visibility = ["//visibility:public"],
)

This genrule renames the import library for tensorflow_cc.dll from tensorflow_cc.dll.if.lib to tensorflow_cc.lib:

genrule(
name = "tensorflow_cc_dll_import_lib",
srcs = [":get_tensorflow_cc_dll_import_lib"],
outs = ["tensorflow_cc.lib"],
cmd = select({
"//tensorflow:windows": "cp -f $< $@",
"//conditions:default": "touch $@",  # Just a placeholder for Unix platforms
}),
visibility = ["//visibility:public"],
)

genrule(
name = "install_headers",
srcs = [
"//tensorflow/c:headers",
"//tensorflow/c/eager:headers",
"//tensorflow/cc:headers",
"//tensorflow/core:headers",
],
outs = ["include"],

  8. Disable the anti-virus checker while compiling !!!

  9. Open the Command Prompt (Run as Admin) !!!

  10. Change directory to:
    repos\GitHub\tensorflow

  11. Build the C/C++ Dynamic-link library (.dll):
    bazel build --config=opt tensorflow:tensorflow_cc

  12. Build the import library (.lib):
    bazel build --config=opt tensorflow:get_tensorflow_cc_dll_import_lib

  13. Rename the import library for tensorflow_cc.dll from tensorflow_cc.dll.if.lib to tensorflow_cc.lib:
    bazel build --config=opt tensorflow:tensorflow_cc_dll_import_lib

  14. Install the headers:
    bazel build --config=opt tensorflow:install_headers

  15. Build an example project that links to tensorflow_cc.lib (see Ref1 or Ref2 below for an example project)

  16. If you get linking errors, then follow the link below (see also Ref2 and Ref3 for TF_EXPORT):
    Tensorflow 2.3 unresolved external symbols in machine-generated files when building C++ project on Windows - Stack Overflow
    a. Add missing symbols to the file tensorflow_filtered_def_file.def
    b. Delete tensorflow_cc.dll and tensorflow_cc.lib (they are read-only).
    c. After you have updated the .def file, open the following command prompt:
    "x64 Native Tools Command Prompt for VS 2019"
    d. Change directory to:
    C:\Users\pbagwell\repos\GitHub\tensorflow

e. Rebuild the DLL by calling the following command:
link.exe /nologo /DLL /SUBSYSTEM:CONSOLE -defaultlib:advapi32.lib -DEFAULTLIB:advapi32.lib -ignore:4221 /FORCE:MULTIPLE /MACHINE:X64 @bazel-out/x64_windows-opt/bin/tensorflow/tensorflow_cc.dll-2.params /OPT:ICF /OPT:REF /DEF:bazel-out/x64_windows-opt/bin/tensorflow/tensorflow_filtered_def_file.def /ignore:4070
  17. To change the name of the output library, for example to tensorflow_cc_gpu.dll, modify the file "C:\Users\pbagwell\repos\GitHub\tensorflow_gpu\bazel-out\x64_windows-opt\bin\tensorflow\tensorflow_cc.dll-2.params".

From:
/OUT:bazel-out/x64_windows-opt/bin/tensorflow/tensorflow_cc.dll
/IMPLIB:bazel-out/x64_windows-opt/bin/tensorflow/tensorflow_cc.dll.if.lib

To:
/OUT:bazel-out/x64_windows-opt/bin/tensorflow/tensorflow_cc_gpu.dll
/IMPLIB:bazel-out/x64_windows-opt/bin/tensorflow/tensorflow_cc_gpu.dll.if.lib

Then relink as in sub-step e above (the link.exe command).
References:
Ref1. Example project with TF_EXPORT, "Tensorflow 2.x C++ API for object detection (inference)" (3 dogs object detection): https://medium.com/@reachraktim/using-the-new-tensorflow-2-x-c-api-for-object-detection-inference-ad4b7fd5fecc
Ref2. TF_EXPORT: https://medium.com/vitrox-publication/deep-learning-frameworks-tensorflow-build-from-source-on-windows-python-c-cpu-gpu-d3aa4d0772d8
Ref3. https://ashley-tharp.medium.com/btw-if-you-enjoy-my-tutorial-i-always-appreciate-endorsements-on-my-linkedin-https-www-linkedin-a6d6fcba1e44

*Note: Clean Bazel between CPU and GPU builds by calling:
bazel clean --expunge


I got TensorFlow 2.8 built on the Windows 11 Alienware (CPU and GPU). When I went to the NVIDIA site to get the CUDA SDK for Windows 11, CUDA SDK 11.2 was not listed. The earliest version for W11 was 11.4.3, which was not listed in TensorFlow as "tested". I went ahead and installed the latest CUDA SDK 11.6.1 (which includes the latest display driver 511.65) and the latest cuDNN 8.3.3, and so far, it works for the 3 dogs example. I updated the document "Techmah CMF\Product Development - General\Engineering\SW\TensorFlow\How_to_Build_TensorFlow2.8_2022_03_25.docx".

Paul Bagwell

Hi,
I followed the steps you provided but still got the following error. I’ve been stuck on this for the past few days. Please help me find a solution.

Thank you,
Shreyas Misra

Installed versions:
TensorFlow source, git branch r2.8
Windows 11
Cuda Toolkit 11.6.1
cuDNN 8.3.3
Bazel 4.2.1
Python 3.9.10
MSYS2 64bit
Visual Studio Community 2019
NVIDIA GTX 1650 Ti, compute capability 7.5

Here are the errors:

ERROR: An error occurred during the fetch of repository 'local_config_cuda':
   Traceback (most recent call last):
        File "C:/users/shrey/source/repos/tensorflow/third_party/gpus/cuda_configure.bzl", line 1401, column 38, in _cuda_autoconf_impl
                _create_local_cuda_repository(repository_ctx)
        File "C:/users/shrey/source/repos/tensorflow/third_party/gpus/cuda_configure.bzl", line 1239, column 56, in _create_local_cuda_repository
                host_compiler_includes + _cuda_include_path(
        File "C:/users/shrey/source/repos/tensorflow/third_party/gpus/cuda_configure.bzl", line 364, column 32, in _cuda_include_path
                inc_entries.append(realpath(repository_ctx, cuda_config.cuda_toolkit_path + "/include"))
        File "C:/users/shrey/source/repos/tensorflow/third_party/remote_config/common.bzl", line 290, column 19, in realpath
                return execute(repository_ctx, [bash_bin, "-c", "realpath \"%s\"" % path]).stdout.strip()
        File "C:/users/shrey/source/repos/tensorflow/third_party/remote_config/common.bzl", line 230, column 13, in execute
                fail(
Error in fail: Repository command failed
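One way to narrow this down is to rerun the exact call that cuda_configure makes through MSYS2 bash (a diagnostic sketch; the CUDA path below is inferred from the build options and may need adjusting). Bazel swallows realpath's error output, but running the call by hand shows it:

```python
import subprocess

# cuda_configure resolves CUDA_TOOLKIT_PATH by running, via MSYS2 bash:
#   bash -c 'realpath "<path>"'
# (see third_party/remote_config/common.bzl in the trace above).
# Rerunning it by hand surfaces the underlying error message.
path = "/c/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/include"
result = subprocess.run(["bash", "-c", 'realpath "%s"' % path],
                        capture_output=True, text=True)
print("exit code:", result.returncode)
print(result.stdout.strip() or result.stderr.strip())
```

A nonzero exit code here usually means the bash Bazel found is not the MSYS2 one (check that C:\msys64\usr\bin is on PATH ahead of any other bash, e.g. WSL's) or that the path does not exist as spelled.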

Bazel build options:

--action_env PYTHON_BIN_PATH=C:/Users/shrey/AppData/Local/Programs/Python/Python39/python.exe 
--action_env PYTHON_LIB_PATH=C:/Users/shrey/AppData/Local/Programs/Python/Python39/lib/site-packages 
--python_path=C:/Users/shrey/AppData/Local/Programs/Python/Python39/python.exe 
--action_env CUDA_TOOLKIT_PATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6 
--action_env CUDNN_INSTALL_PATH=C:/Program Files/NVIDIA/CUDNN/v8.3 
--action_env TF_CUDA_COMPUTE_CAPABILITIES=7.5
--config=cuda --copt=/d2ReducedOptimizeHugeFunctions 
--host_copt=/d2ReducedOptimizeHugeFunctions 
--define=override_eigen_strong_inline=true

I also tried following the tagged references but still got the errors. Any help is greatly appreciated.