
jax-windows-builder's People

Contributors

angelicknight, cloudhan, gboehl


jax-windows-builder's Issues

Cudnn error? I don't know why

Attempting to switch between multiple versions of cuDNN results in the following error, and it is unclear what caused it. Can you help me?

[libprotobuf ERROR external/com_google_protobuf/src/google/protobuf/text_format.cc:337] Error parsing text-format xla.gpu.DeviceHloInstructionProfiles: 1463:15: Message type "xla.gpu.HloInstructionProfile" has no field named "entries".
2024-01-14 03:06:41.613349: F external/xla/xla/service/gpu/gpu_hlo_cost_analysis.cc:289] Check failed: tsl::protobuf::TextFormat::ParseFromString( std::string(kDeviceHloOpProfiles), &all_device_profiles)

ValueError: DenseElementsAttr could not be constructed from the given buffer. This may mean that the Python buffer layout does not match that MLIR expected layout and is a bug.

A Stack Overflow user reported that these JAX builds raise the error in the title when running the following code:

import jax.numpy as jnp
a = jnp.array([[1, 2], [3, 5]])
b = jnp.array([1, 2])
x = jnp.linalg.solve(a, b)
print(x)

This appears to happen in versions 0.3.7 and 0.3.5 but not 0.3.2. I can't check at the moment, but I believe similar errors also occurred when trying to generate any random numbers on these versions.

How did you resolve "The specified path is too long" in Windows?

ERROR: C://fgk4kwmh/external/xla/xla/mlir_hlo/BUILD:1481:11: Compiling xla/mlir_hlo/mhlo/analysis/shape_component_analysis.cc failed: (Exit 1): msvc_wrapper_for_nvcc.bat failed: error executing command
cd /d C:/
/fgk4kwmh/execroot/main
SET CUDA_TOOLKIT_PATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8
SET CUDNN_INSTALL_PATH=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8
SET INCLUDE=C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\include;C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\ATLMFC\include;C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Auxiliary\VS\include;C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\ucrt;C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\um;C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\shared;C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\winrt;C:\Program Files (x86)\Windows Kits\10\include\10.0.22621.0\cppwinrt;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um
SET LIB=C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\ATLMFC\lib\x64;C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\lib\x64;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\lib\um\x64;C:\Program Files (x86)\Windows Kits\10\lib\10.0.22621.0\ucrt\x64;C:\Program Files (x86)\Windows Kits\10\lib\10.0.22621.0\um\x64
SET PATH=C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\bin\HostX64\x64;C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\IDE\VC\VCPackages;C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\IDE\CommonExtensions\Microsoft\TestWindow;C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer;C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Current\bin\Roslyn;C:\Program Files\Microsoft Visual Studio\2022\Professional\Team Tools\Performance Tools\x64;C:\Program Files\Microsoft Visual Studio\2022\Professional\Team Tools\Performance Tools;C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.8 Tools\x64;C:\Program Files (x86)\HTML Help Workshop;C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\IDE\CommonExtensions\Microsoft\FSharp\Tools;C:\Program Files (x86)\Windows Kits\10\bin\10.0.22621.0\x64;C:\Program Files (x86)\Windows Kits\10\bin\x64;C:\Program Files\Microsoft Visual Studio\2022\Professional\MSBuild\Current\Bin\amd64;C:\Windows\Microsoft.NET\Framework64\v4.0.30319;C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\IDE;C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\Tools;;C:\Windows\system32;C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\Llvm\x64\bin;C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\IDE\CommonExtensions\Microsoft\CMake\CMake\bin;C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\IDE\CommonExtensions\Microsoft\CMake\Ninja;C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\IDE\VC\Linux\bin\ConnectionManagerExe
SET PWD=/proc/self/cwd
SET RUNFILES_MANIFEST_ONLY=1
SET TEMP=C:\Users\admin\AppData\Local\Temp
SET TF_CUDA_COMPUTE_CAPABILITIES=sm_52,sm_60,sm_70,compute_80
SET TF_CUDA_PATHS=C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.8
SET TF_CUDA_VERSION=11.8
SET TF_CUDNN_VERSION=8.9.5
SET TMP=C:\Users\admin\AppData\Local\Temp
external\local_config_cuda\crosstool\windows\msvc_wrapper_for_nvcc.bat /nologo /DCOMPILER_MSVC /DNOMINMAX /D_WIN32_WINNT=0x0600 /D_CRT_SECURE_NO_DEPRECATE /D_CRT_SECURE_NO_WARNINGS /D_SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS /bigobj /Zm500 /J /Gy /GF /EHsc /wd4351 /wd4291 /wd4250 /wd4996 /Iexternal/xla /Ibazel-out/x64_windows-opt/bin/external/xla /Iexternal/llvm-project /Ibazel-out/x64_windows-opt/bin/external/llvm-project /Iexternal/stablehlo /Ibazel-out/x64_windows-opt/bin/external/stablehlo /Ibazel-out/x64_windows-opt/bin/external/xla/xla/mlir_hlo/_virtual_includes/shape_component_analysis /Ibazel-out/x64_windows-opt/bin/external/xla/xla/mlir_hlo/_virtual_includes/mlir_hlo /Ibazel-out/x64_windows-opt/bin/external/xla/xla/mlir_hlo/_virtual_includes/canonicalize_inc_gen /Ibazel-out/x64_windows-opt/bin/external/xla/xla/mlir_hlo/_virtual_includes/convert_op_folder /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinAttributeInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinAttributesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinDialectBytecodeGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinDialectIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinLocationAttributesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinTypeInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BuiltinTypesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/CallOpInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/CastOpInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/FunctionInterfacesIncGen 
/Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/InferTypeOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/OpAsmInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/RegionKindInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SideEffectInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SymbolInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/TensorEncodingIncGen /Ibazel-out/x64_windows-opt/bin/external/xla/xla/mlir_hlo/_virtual_includes/hlo_ops_attrs_inc_gen /Ibazel-out/x64_windows-opt/bin/external/xla/xla/mlir_hlo/_virtual_includes/hlo_ops_common /Ibazel-out/x64_windows-opt/bin/external/xla/xla/mlir_hlo/_virtual_includes/hlo_ops_enums_inc_gen /Ibazel-out/x64_windows-opt/bin/external/xla/xla/mlir_hlo/_virtual_includes/hlo_ops_inc_gen /Ibazel-out/x64_windows-opt/bin/external/xla/xla/mlir_hlo/_virtual_includes/hlo_ops_pattern_inc_gen /Ibazel-out/x64_windows-opt/bin/external/xla/xla/mlir_hlo/_virtual_includes/hlo_ops_typedefs_inc_gen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ArithBaseIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ArithCanonicalizationIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ArithOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ArithOpsInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/BytecodeOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/InferIntRangeInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/VectorInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ControlFlowInterfacesIncGen 
/Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/LoopLikeInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/AsmParserTokenKinds /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/DialectUtilsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ViewLikeInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ComplexAttributesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ComplexBaseIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ComplexOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ControlFlowOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/FuncIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/LLVMDialectInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/LLVMIntrinsicOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/LLVMOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/LLVMTypesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/MemorySlotInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/CopyOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/MemRefBaseIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/MemRefOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ShapedOpInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/DestinationStyleOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ValueBoundsOpInterfaceIncGen 
/Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/QuantOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/PDLOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/PDLTypesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/PDLInterpOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ConversionPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/TransformsPassIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/MLIRShapeCanonicalizationIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ShapeOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/AffineMemoryOpInterfacesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/AffineOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/ParallelCombiningOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/TensorOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/TilingInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SparseTensorAttrDefsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SparseTensorOpsIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/SparseTensorTypesIncGen /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/_virtual_includes/RuntimeVerifiableOpInterfaceIncGen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/base /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/base_attr_interfaces_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/broadcast_utils 
/Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/chlo_ops /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/chlo_attrs_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/chlo_enums_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/chlo_ops_inc_gen /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/stablehlo_type_inference /Ibazel-out/x64_windows-opt/bin/external/stablehlo/_virtual_includes/stablehlo_assembly_format /Iexternal/llvm-project/llvm/include /Ibazel-out/x64_windows-opt/bin/external/llvm-project/llvm/include /Iexternal/llvm-project/mlir/include /Ibazel-out/x64_windows-opt/bin/external/llvm-project/mlir/include /D_CRT_SECURE_NO_DEPRECATE /D_CRT_SECURE_NO_WARNINGS /D_CRT_NONSTDC_NO_DEPRECATE /D_CRT_NONSTDC_NO_WARNINGS /D_SCL_SECURE_NO_DEPRECATE /D_SCL_SECURE_NO_WARNINGS /DUNICODE /D_UNICODE /DLTDL_SHLIB_EXT=".dll" /DLLVM_PLUGIN_EXT=".dll" /DLLVM_NATIVE_ARCH="X86" /DLLVM_NATIVE_ASMPARSER=LLVMInitializeX86AsmParser /DLLVM_NATIVE_ASMPRINTER=LLVMInitializeX86AsmPrinter /DLLVM_NATIVE_DISASSEMBLER=LLVMInitializeX86Disassembler /DLLVM_NATIVE_TARGET=LLVMInitializeX86Target /DLLVM_NATIVE_TARGETINFO=LLVMInitializeX86TargetInfo /DLLVM_NATIVE_TARGETMC=LLVMInitializeX86TargetMC /DLLVM_NATIVE_TARGETMCA=LLVMInitializeX86TargetMCA /DLLVM_HOST_TRIPLE="x86_64-pc-win32" /DLLVM_DEFAULT_TARGET_TRIPLE="x86_64-pc-win32" /DLLVM_VERSION_MAJOR=17 /DLLVM_VERSION_MINOR=0 /DLLVM_VERSION_PATCH=0 /DLLVM_VERSION_STRING="17.0.0git" /D__STDC_LIMIT_MACROS /D__STDC_CONSTANT_MACROS /D__STDC_FORMAT_MACROS /DBLAKE3_USE_NEON=0 /DBLAKE3_NO_AVX2 /DBLAKE3_NO_AVX512 /DBLAKE3_NO_SSE2 /DBLAKE3_NO_SSE41 /showIncludes /MD /O2 /DNDEBUG /D_USE_MATH_DEFINES -DWIN32_LEAN_AND_MEAN -DNOGDI /Zc:preprocessor -DMLIR_PYTHON_PACKAGE_PREFIX=jaxlib.mlir. 
/std:c++17 /Fobazel-out/x64_windows-opt/bin/external/xla/xla/mlir_hlo/_objs/shape_component_analysis/shape_component_analysis.obj /c external/xla/xla/mlir_hlo/mhlo/analysis/shape_component_analysis.cc

Configuration: 9d9415326b9c9b7edbabfb78616f5aa03e769d64aed4e7f506357f4f760ed42c

Execution platform: @local_execution_config_platform//:platform

The specified path is too long.
Target //build:build_wheel failed to build
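Two remedies that often help here are shortening Bazel's output root (e.g. starting Bazel with the real `--output_user_root=C:/b` startup option, so generated paths begin from a short prefix) and enabling Win32 long paths via the `LongPathsEnabled` registry value under `HKLM\SYSTEM\CurrentControlSet\Control\FileSystem`. To see which generated files actually exceed the legacy 260-character limit, a quick scan can help; this is a sketch, and `too_long_paths` is a hypothetical helper, not part of the build:

```python
import os

MAX_PATH = 260  # legacy Windows path-length limit without long-path support

def too_long_paths(root, limit=MAX_PATH):
    """Return absolute file paths under `root` longer than `limit` characters."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.abspath(os.path.join(dirpath, name))
            if len(path) > limit:
                hits.append(path)
    return hits
```

Running this over the Bazel execroot shows whether shortening the output root alone would be enough, or whether long-path support is also needed.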

jaxlib 0.3.7

This project is awesome. Any chance of a 0.3.7 release? 🙏☺️

XLA fails not finding DNN, even though only using the CPU-only build

Hi! Thanks a lot for providing these builds!

I've installed jax on my Windows 10 via pip install "jax[cpu]===0.3.14" -f https://whls.blob.core.windows.net/unstable/index.html --use-deprecated legacy-resolver and then lightweightMMM via pip install lightweight_mmm. Then when I try to run the simple example notebook, the utils.simulate_dummy_data() call fails because XLA can't find DNN.

Why would this be an issue, if I'm running a CPU-only build?

Many thanks for any hints you can give,
E.

XlaRuntimeError: UNKNOWN: Failed to determine best cudnn convolution algorithm for:
%cudnn-conv = (f32[1,3,140]{2,1,0}, u8[0]{0}) custom-call(f32[1,3,140]{2,1,0} %bitcast.1, f32[3,1,25]{2,1,0} %bitcast.2), window={size=25 pad=12_12}, dim_labels=bf0_oi0->bf0, feature_group_count=3, custom_call_target="__cudnn$convForward", metadata={op_name="jit(carryover)/jit(main)/conv_general_dilated[window_strides=(1,) padding=((12, 12),) lhs_dilation=(1,) rhs_dilation=(1,) dimension_numbers=ConvDimensionNumbers(lhs_spec=(0, 1, 2), rhs_spec=(0, 1, 2), out_spec=(0, 1, 2)) feature_group_count=3 batch_group_count=1 lhs_shape=(1, 3, 140) rhs_shape=(3, 1, 25) precision=None preferred_element_type=None]" source_file="C:\\Users\\egor.kraev\\.conda\\envs\\py38_jax\\lib\\site-packages\\lightweight_mmm\\media_transforms.py" source_line=133}, backend_config="{\"conv_result_scale\":1,\"activation_mode\":\"0\",\"side_input_scale\":0}"

Original error: UNIMPLEMENTED: DNN library is not found.
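One thing worth checking: `pip install lightweight_mmm` may have pulled in or replaced jax/jaxlib from PyPI over the CPU wheel (an assumption; `pip list` would confirm). If a GPU-capable jaxlib is present, pinning JAX to the CPU backend before it initializes avoids any cuDNN probing. A minimal sketch using the real `JAX_PLATFORM_NAME` environment variable (whether it applies to this older 0.3.x build is an assumption):

```python
import os

# Pin JAX to the CPU backend before importing it, so XLA never probes
# for cuDNN; JAX reads this variable at backend-initialization time.
os.environ["JAX_PLATFORM_NAME"] = "cpu"

# import jax               # safe to import after setting the variable
# print(jax.devices())     # should report CPU devices only
```

The same effect can be achieved by setting the variable in the shell before launching Python.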

Update to jaxlib 0.4.1 ?

This is a major update to JAX. It would be great to get this building on Windows. I have been struggling to build it myself locally.

building lower versions of jax gives "Problem getting python include path."

I am trying to build jaxlib 0.1.64, which is required by jax 0.2.12 for GPT-J.
It fails to build. The detailed error messages are:
ERROR: An error occurred during the fetch of repository 'local_execution_config_python': Traceback (most recent call last):
  File "C:/users/runneradmin/bzl_out/ktt4c75d/external/org_tensorflow/third_party/py/python_configure.bzl", line 212, column 41, in _create_local_python_repository
    python_include = _get_python_include(repository_ctx, python_bin)
  File "C:/users/runneradmin/bzl_out/ktt4c75d/external/org_tensorflow/third_party/py/python_configure.bzl", line 152, column 21, in _get_python_include
    result = execute(
  File "C:/users/runneradmin/bzl_out/ktt4c75d/external/org_tensorflow/third_party/remote_config/common.bzl", line 219, column 13, in execute
    fail(
Error in fail: Problem getting python include path.

Any idea?
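The configure step shells out to the selected Python binary and asks it for its header directory, so seeing what that query returns usually pinpoints the problem (e.g. a venv without headers, or the wrong interpreter on PATH). A diagnostic sketch, not part of the build scripts:

```python
import os
import sysconfig

# python_configure.bzl effectively runs a query like this against the
# interpreter Bazel picked; an empty result or a nonexistent directory
# produces "Problem getting python include path".
include_dir = sysconfig.get_paths()["include"]
print(include_dir, os.path.isdir(include_dir))
```

Run it with the exact interpreter the build is configured to use (the one in `PYTHON_BIN_PATH`, if set).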

Encode cu82 in whl name

For example:

jaxlib-0.3.2+cuda111-cp39-none-win_amd64.whl should be renamed to jaxlib-0.3.2+cuda11.cudnn82-cp39-none-win_amd64.whl

  • update build script to generate new package with new naming format.
  • rename all CUDA packages after 0.1.72 to have the new naming format.
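The rename itself is mechanical. A sketch of the tag rewrite (assuming cuDNN 8.2 for the `cudnn82` suffix; `new_style_name` is a hypothetical helper, not part of the build script):

```python
import re

def new_style_name(wheel, cudnn="82"):
    """Rewrite '+cuda111'-style local version tags to '+cuda11.cudnnNN'."""
    # 'cuda111' encodes CUDA 11.1: keep the two-digit major version, drop
    # the trailing minor digit, and append the cuDNN version instead.
    return re.sub(r"\+cuda(\d\d)\d*-", rf"+cuda\1.cudnn{cudnn}-", wheel)

print(new_style_name("jaxlib-0.3.2+cuda111-cp39-none-win_amd64.whl"))
# jaxlib-0.3.2+cuda11.cudnn82-cp39-none-win_amd64.whl
```

Note that the local version segment inside the wheel's METADATA would need the same update for pip to accept the renamed file.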

Seems to install okay, but Python says jaxlib is not installed afterwards

I've tried a few different CUDA wheels of jax, and pip says it successfully installs them. But when I try to run a script using the pipeline module, it fails, saying jaxlib is not installed. Is jaxlib a separate module? If I try to pip install jaxlib, it just tries to install jax (which fails), which is why I am using the jax-windows-builder whl link with this command:
pip install jax[cuda111] -f https://whls.blob.core.windows.net/unstable/index.html --use-deprecated legacy-resolver

Console output from virtual environment

(virtual_env) D:\Projects\python_virtv\virtual_env\Scripts>pip install jax[cuda111] -f https://whls.blob.core.windows.net/unstable/index.html --use-deprecated legacy-resolver
Looking in links: https://whls.blob.core.windows.net/unstable/index.html
Processing c:\users\erispe\appdata\local\pip\cache\wheels\4a\ca\ed\ae451e0a70bc64df2bc46d2253c210302b7b517aeeb72e4755\jax-0.4.1-py3-none-any.whl
  WARNING: jax 0.4.1 does not provide the extra 'cuda111'
Requirement already satisfied: opt-einsum in d:\projects\python_virtv\virtual_env\lib\site-packages (from jax[cuda111]) (3.3.0)
Requirement already satisfied: numpy>=1.20 in d:\projects\python_virtv\virtual_env\lib\site-packages (from jax[cuda111]) (1.24.1)
Requirement already satisfied: scipy>=1.5 in d:\projects\python_virtv\virtual_env\lib\site-packages (from jax[cuda111]) (1.10.0)
Installing collected packages: jax
Successfully installed jax-0.4.1

Then, when I run the script using a pipeline model, in this case gpt-jp, I get this:

(virtual_env) D:\Projects\python_virtv\virtual_env\Scripts>python ../gtpjp_quickstart.py
Traceback (most recent call last):
  File "D:\Projects\python_virtv\virtual_env\lib\site-packages\jax\_src\lib\__init__.py", line 25, in <module>
    import jaxlib as jaxlib
ModuleNotFoundError: No module named 'jaxlib'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\Projects\python_virtv\virtual_env\gtpjp_quickstart.py", line 2, in <module>
    from transformers import pipeline
  File "D:\Projects\python_virtv\virtual_env\lib\site-packages\transformers\__init__.py", line 30, in <module>
    from . import dependency_versions_check
  File "D:\Projects\python_virtv\virtual_env\lib\site-packages\transformers\dependency_versions_check.py", line 17, in <module>
    from .utils.versions import require_version, require_version_core
  File "D:\Projects\python_virtv\virtual_env\lib\site-packages\transformers\utils\__init__.py", line 34, in <module>
    from .generic import (
  File "D:\Projects\python_virtv\virtual_env\lib\site-packages\transformers\utils\generic.py", line 36, in <module>
    import jax.numpy as jnp
  File "D:\Projects\python_virtv\virtual_env\lib\site-packages\jax\__init__.py", line 35, in <module>
    from jax import config as _config_module
  File "D:\Projects\python_virtv\virtual_env\lib\site-packages\jax\config.py", line 17, in <module>
    from jax._src.config import config  # noqa: F401
  File "D:\Projects\python_virtv\virtual_env\lib\site-packages\jax\_src\config.py", line 28, in <module>
    from jax._src import lib
  File "D:\Projects\python_virtv\virtual_env\lib\site-packages\jax\_src\lib\__init__.py", line 27, in <module>
    raise ModuleNotFoundError(
ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.

Fixing Build Question From #14466 Post

I have been working through your excellent post google/jax#14466 , and got to this part.

Cause of the "already defined xla::runtime::ffi::GetXlaFfiStream" error: GetXlaFfiStream is first defined as a weak symbol in ffi.cc,
then defined as a normal symbol in executable.cc, and MSVC does not support weak symbols.

As a dirty fix, removing the definition in ffi.cc and replacing it with a declaration will allow the linking to pass.

I understand the weak symbol issue, but could you clarify exactly what changes you made when you say "replace it with a decl"?

Could not load dynamic library 'cudart64_110.dll'

I've installed this build, but I keep getting the "Could not load dynamic library 'cudart64_110.dll'" error even though I clearly see the dll in a directory on the PATH.

My configuration:

Windows 11 Home 21H2
Python: 3.10.7
CUDA: 11.1.0 (for Win10, there's no dedicated build for Win11)
CuDNN: cudnn-11.3-windows-x64-v8.2.1.32
jaxlib: jaxlib-0.3.14+cuda11.cudnn82-cp310-none-win_amd64.whl 
jax: 0.3.14

CUDA + CuDNN are installed in c:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin, which is on the PATH.

Full error:

$ python
Python 3.10.7 (tags/v3.10.7:6cc6b13, Sep  5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import jax
2022-09-18 11:52:21.036734: W external/org_tensorflow/tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2022-09-18 11:52:21.058690: W external/org_tensorflow/tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
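One thing worth ruling out: since Python 3.8, CPython on Windows no longer consults PATH when extension modules resolve their dependent DLLs, so having cudart64_110.dll on PATH may not be enough on Python 3.10. Whether jaxlib's loader goes through this code path is an assumption, but registering the directory explicitly is a cheap thing to try (the CUDA path below is taken from this report's configuration):

```python
import os

# Python 3.8+ on Windows ignores PATH for extension-module DLL resolution;
# register the CUDA bin directory explicitly before importing jax.
cuda_bin = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin"
if os.name == "nt" and os.path.isdir(cuda_bin):
    os.add_dll_directory(cuda_bin)

# import jax  # cudart64_110.dll should now be resolvable
```

Note also that the wheel name says cudnn82, i.e. it expects a CUDA 11.x runtime whose DLLs actually carry the `cudart64_110.dll` name; a mismatched toolkit version would produce the same error.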

Q: Installing CPU only version

@cloudhan,
Really great work on the jax-windows-builder!

When installing jaxlib using:
pip install jaxlib -f https://whls.blob.core.windows.net/unstable/index.html --use-deprecated legacy-resolver

this always installs the CUDA version (https://whls.blob.core.windows.net/unstable/cuda111/jaxlib-0.3.5+cuda11.cudnn82-cp39-none-win_amd64.whl).

How do I make it install only the CPU version, without using a direct link like
pip install https://whls.blob.core.windows.net/unstable/cpu/jaxlib-0.3.5-cp39-none-win_amd64.whl
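The bare `pip install jaxlib` appears to pick the CUDA wheel because, under PEP 440 ordering, a local version such as 0.3.5+cuda11.cudnn82 sorts above the plain 0.3.5, so the direct link remains the reliable way to get the CPU build. If you are scripting against the index, the variant can at least be read off the filename (a sketch; `wheel_variant` is a hypothetical helper):

```python
def wheel_variant(wheel):
    """Return the local-version tag of a jaxlib wheel, or 'cpu' if none."""
    version = wheel.split("-")[1]            # e.g. '0.3.5+cuda11.cudnn82'
    return version.split("+", 1)[1] if "+" in version else "cpu"

print(wheel_variant("jaxlib-0.3.5+cuda11.cudnn82-cp39-none-win_amd64.whl"))  # cuda11.cudnn82
print(wheel_variant("jaxlib-0.3.5-cp39-none-win_amd64.whl"))                 # cpu
```

Filtering the index for wheels whose variant is "cpu" gives the direct URLs to install.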

Installation is not working

Hi there,

First of all, thank you for supporting Windows! I've used this build before with great success. However, at the moment it's not working, nor can I find a way to get an older version to work.

I'm trying to set up jax on a Windows PC with conda, but the provided instructions no longer work. I also can't really get any other version to work.

I'm installing on a laptop, this is the output from nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 527.83       Driver Version: 527.83       CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ... WDDM  | 00000000:01:00.0 Off |                  N/A |
| N/A   50C    P0     9W /  30W |      0MiB /  2048MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

I tried:

conda create -n jax_test python
conda activate jax_test
# install it
pip install jax[cuda111] -f https://whls.blob.core.windows.net/unstable/index.html --use-deprecated legacy-resolver
# installs numpy, etc.. 
# raises a warning:
#   WARNING: jax 0.4.19 does not provide the extra 'cuda111'

python -m jax
#  File "C:\ProgramData\Anaconda3\envs\jax_test\Lib\site-packages\jax\_src\lib\__init__.py", line 27, in <module>
#    raise ModuleNotFoundError(
# ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.

This might obviously not work for CUDA 12.0. However, if I run it with

pip install jax[pip_cuda12] -f https://whls.blob.core.windows.net/unstable/index.html --use-deprecated legacy-resolver
I get the same result.

I also tried this for python==3.11, python==3.10 or python==3.9. Same result.

When I just download a jaxlib wheel, it also does not work; sometimes I get a bit further, but no computations can be done and I run into 'AttributeError: module 'ml_dtypes' has no attribute 'float8_e4m3b11''.

What should the python version be? And what would be the right command?
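The ml_dtypes AttributeError is typically version skew between jax/jaxlib and ml_dtypes: the float8 dtype names changed across ml_dtypes releases, so an older jaxlib can reference a name a newer ml_dtypes no longer exposes. A small preflight check (a sketch; `has_attr` is a hypothetical helper):

```python
import importlib

def has_attr(module_name, attr):
    """Report whether `module_name` imports and exposes `attr`."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, attr)

# If this prints False while ml_dtypes is installed, the installed
# ml_dtypes is too new for the jaxlib build, and pinning an older
# ml_dtypes release is the likely fix.
print(has_attr("ml_dtypes", "float8_e4m3b11"))
```

Matching the jax, jaxlib, and ml_dtypes versions that were released together is the safest route.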

"ptxas returned an error during compilation of ptx to sass"

I've just installed the jax + jaxlib from jaxlib-0.3.25+cuda11.cudnn82-cp310-cp310-win_amd64.whl and I'm getting the following error:

Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import jax.numpy as jnp
>>> a = jnp.array([1, 2, 3])
>>> a
DeviceArray([1, 2, 3], dtype=int32)
>>> a + a
2022-12-18 15:28:18.688396: F external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:453] ptxas returned an error during compilation of ptx to sass: 'INTERNAL: ptxas exited with non-zero error code -1, output: '  If the error message indicates that a file could not be written, please verify that sufficient filesystem space is provided.

ptxas version:

$ ptxas --version
ptxas: NVIDIA (R) Ptx optimizing assembler
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Tue_Sep_15_19:11:24_Pacific_Daylight_Time_2020
Cuda compilation tools, release 11.1, V11.1.74
Build cuda_11.1.relgpu_drvr455TC455_06.29069683_0
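The jaxlib-0.3.25+cuda11.cudnn82 wheel was likely compiled with a CUDA toolkit newer than 11.1, and an older ptxas generally cannot assemble PTX emitted for a newer toolkit; upgrading the local CUDA toolkit (or pointing XLA at a newer one via the real flag `XLA_FLAGS=--xla_gpu_cuda_data_dir=<path>`) is the usual fix. Checking the installed version programmatically (a sketch; `ptxas_release` is a hypothetical helper):

```python
import re

def ptxas_release(version_output):
    """Extract the CUDA release from `ptxas --version` output as (major, minor)."""
    m = re.search(r"release (\d+)\.(\d+)", version_output)
    return (int(m.group(1)), int(m.group(2))) if m else None

sample = "Cuda compilation tools, release 11.1, V11.1.74"
print(ptxas_release(sample))  # (11, 1)
```

In practice the output of `subprocess.run(["ptxas", "--version"], ...)` would be fed to this parser and compared against the toolkit version the wheel was built with.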

"Cannot find valid initial parameters. Please check your model again." error once data size exceeds 3500 row count

We installed the jaxlib package following the instructions in the readme file.

I am using jaxlib and the lightweight_mmm library to perform a Bayesian approach to marketing mix modelling. The package works properly while building a model, but when we try the same with a larger dataset it throws the following error.

RuntimeError: Cannot find valid initial parameters. Please check your model again.

Please let me know in case any additional information is needed.

Great Job on Update

Not an issue, I just wanted to leave a message to say how much I appreciate the work involved in getting Jaxlib building on Windows again.

Is it possible to build a wheel for cuDNN v8.8?

I'm trying to find a method of installing JAX v0.4.11 from here.

Q. Would it be possible to build a wheel for cudnn v8.8?

The reason? Unfortunately, there are no Anaconda builds for cuDNN v8.6 or v8.9; the best one I could find was cuDNN v8.8, see:
https://anaconda.org/conda-forge/cudnn/files

Appendix A

Here is how I installed JAX v0.3.25 on Windows + Anaconda. It is a completely self-contained method that does not rely on any external Windows installers from NVIDIA.

BTW, I could create a pull request with these extra docs if it would help others?

# Install Anaconda or Miniconda
conda create -n py310jax python=3.10 -y
conda activate py310jax
conda install -c conda-forge cudatoolkit=11.1 cudnn -y
# Tensorflow 2.10 was the last version to support CUDA+GPU on Windows.
pip install "tensorflow<2.11"
# Install jaxlib
#   - Download file "jaxlib-0.3.25+cuda11.cudnn82-cp310-cp310-win_amd64.whl" from "https://whls.blob.core.windows.net/unstable/index.html"
pip install jaxlib-0.3.25+cuda11.cudnn82-cp310-cp310-win_amd64.whl
# Install matching version of jax
pip install jax==0.3.25
# Now we can run JAX-based Python code on Windows.
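A quick sanity check after the steps above: conda's Windows packages drop the CUDA/cuDNN DLLs into `<env>\Library\bin`, and on Python 3.8+ under Windows that directory may need to be registered explicitly for extension modules to find them (whether this jaxlib build needs the explicit registration is an assumption; the check itself is harmless):

```python
import os
import sys

# Conda's Windows packages put runtime DLLs (cudart64_*.dll, cudnn64_*.dll)
# under <env>\Library\bin; confirm the directory exists in this environment.
lib_bin = os.path.join(sys.prefix, "Library", "bin")
print(lib_bin, os.path.isdir(lib_bin))

# On Python 3.8+ under Windows, register it for extension-module DLL lookup.
if os.name == "nt" and os.path.isdir(lib_bin):
    os.add_dll_directory(lib_bin)
```

If the directory is missing, the cudatoolkit/cudnn conda packages did not install correctly into the active environment.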

WARNING: jax 0.3.4 does not provide the extra 'cuda111'

Hello, I'm running the following command as described in your README

pip install jax[cuda111] -f https://whls.blob.core.windows.net/unstable/index.html --use-deprecated legacy-resolver

but I get a warning saying that jax does not provide the extra 'cuda111'

Full command output:

Looking in links: https://whls.blob.core.windows.net/unstable/index.html
Processing c:\users\myname\appdata\local\pip\cache\wheels\09\88\75\b38c1c9382c2a6e9d3ab993f7b\jax-0.3.4-py3-none-any.whl
  WARNING: jax 0.3.4 does not provide the extra 'cuda111'
Requirement already satisfied: absl-py in ... (from jax[cuda111]) (1.0.0)
Requirement already satisfied: numpy>=1.19 in ... (from jax[cuda111]) (1.21.5)
Requirement already satisfied: typing-extensions in ... (from jax[cuda111]) (4.1.1)
Requirement already satisfied: scipy>=1.2.1 in ... (from jax[cuda111]) (1.8.0)
Requirement already satisfied: opt-einsum in ... (from jax[cuda111]) (3.3.0)
Requirement already satisfied: six in ... (from absl-py->jax[cuda111]) (1.16.0)
Installing collected packages: jax
Successfully installed jax-0.3.4

Am I doing something wrong?
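For what it's worth, in my experience this extras warning is usually benign: pip still installs jax itself, and what matters is that jax and jaxlib end up as a compatible pair. A quick check (my own sketch, not from the report above):

```python
# Confirm that jax and jaxlib are both installed and at matching versions.
import jax
import jaxlib

print("jax:", jax.__version__)
print("jaxlib:", jaxlib.__version__)
```

If the jaxlib version is missing or far from the jax version, install the matching jaxlib wheel from the index explicitly before installing jax.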

CPU only builds?

Any chance you could add CPU only builds from this PR:
google/jax#15009

That would at least allow many JAX-dependent libraries to provide support (e.g. clients and inference) even if it's slow.
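As a stopgap until CPU-only wheels exist, a GPU wheel can still be pinned to the CPU backend at runtime. A minimal sketch (this uses the standard `jax_platform_name` config flag; it is my own example, not part of this repo):

```python
import jax

# Force JAX onto the CPU backend before any computation runs,
# so a CUDA-built jaxlib behaves like a CPU-only install.
jax.config.update("jax_platform_name", "cpu")

x = jax.numpy.arange(4)
print(jax.default_backend(), x.sum())  # cpu 6
```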

Python 3.11

The main repo is doing 3.11 releases now, so it would be awesome to get some 3.11 builds here as well, possibly even retrospectively for some older versions.

Error with CUDNN 8.9.2

Hi,
I just installed [cuda/jaxlib-0.4.11+cuda.cudnn89-cp311-cp311-win_amd64.whl](https://whls.blob.core.windows.net/unstable/cuda/jaxlib-0.4.11+cuda.cudnn89-cp311-cp311-win_amd64.whl). I have CUDA 11.7 and CUDNN 8.9.2. When I run this:

import jax
from jax.lib import xla_bridge
print(xla_bridge.get_backend().platform)
jax.numpy.array(1.0)

I get the correct print output but then an error:

gpu
2023-07-03 13:06:41.867294: E external/xla/xla/stream_executor/cuda/cuda_dnn.cc:407] There was an error before creating cudnn handle (302): cudaGetErrorName symbol not found. : cudaGetErrorString symbol not found.
Traceback (most recent call last):
  File "S:\dev\tapnet\projects\test.py", line 10, in <module>
    jax.numpy.array(1.0)
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\numpy\lax_numpy.py", line 2051, in array
    out_array: Array = lax_internal._convert_element_type(out, dtype, weak_type=weak_type)
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\lax\lax.py", line 549, in _convert_element_type
    return convert_element_type_p.bind(operand, new_dtype=new_dtype,
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\core.py", line 380, in bind
    return self.bind_with_trace(find_top_trace(args), args, params)
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\core.py", line 383, in bind_with_trace
    out = trace.process_primitive(self, map(trace.full_raise, args), params)
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\core.py", line 815, in process_primitive
    return primitive.impl(*tracers, **params)
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\dispatch.py", line 132, in apply_primitive
    compiled_fun = xla_primitive_callable(
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\util.py", line 284, in wrapper
    return cached(config._trace_context(), *args, **kwargs)
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\util.py", line 277, in cached
    return f(*args, **kwargs)
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\dispatch.py", line 223, in xla_primitive_callable
    compiled = _xla_callable_uncached(
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\dispatch.py", line 253, in _xla_callable_uncached
    return computation.compile().unsafe_call
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\interpreters\pxla.py", line 2323, in compile
    executable = UnloadedMeshExecutable.from_hlo(
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\interpreters\pxla.py", line 2645, in from_hlo
    xla_executable, compile_options = _cached_compilation(
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\interpreters\pxla.py", line 2555, in _cached_compilation
    xla_executable = dispatch.compile_or_get_cached(
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\dispatch.py", line 497, in compile_or_get_cached
    return backend_compile(backend, computation, compile_options,
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\profiler.py", line 314, in wrapper
    return func(*args, **kwargs)
  File "S:\dev\tapnet\Lib\site-packages\jax\_src\dispatch.py", line 465, in backend_compile
    return backend.compile(built_c, compile_options=options)
jaxlib.xla_extension.XlaRuntimeError: FAILED_PRECONDITION: DNN library initialization failed. Look at the errors above for more details.

Any idea where I should start looking?
Thanks!
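A reasonable first step for this class of "DNN library initialization failed" errors is to compare the CUDA/cuDNN versions JAX actually finds against what the wheel expects (here, CUDA 11.x + cuDNN 8.9). Recent jax versions ship a built-in helper for that; a minimal sketch, assuming a jax version new enough to provide it:

```python
import jax

# Dumps Python, jax/jaxlib versions, and (where detectable) device and
# driver details, to compare against the wheel's cuda/cudnn89 tag.
jax.print_environment_info()
```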

Builds for Python 3.10

Thank you very much for your work! Are you planning to provide builds for Python 3.10 anytime soon?

TF Compression and TensorFlow Federated

Now that these jaxlib wheels exist, the only compiled dependency for Windows that prevents TFF from running is tensorflow-compression.

https://github.com/tensorflow/compression

If we can get the requirements for jax and jaxlib relaxed a little bit here:
google-parfait/tensorflow-federated#3242

I imagine TF compression is nowhere near as complicated as jaxlib + CUDA etc. to get running in a fork and to provide wheels for.

Is this something you would be interested in some help with?

Fatal error

I can't seem to get around this error. What am I doing wrong?

git : fatal: not a git repository (or any of the parent directories): .git
At C:\jax-windows-builder\bazel-build-cpu.ps1:45 char:5
+     git checkout .bazelrc
+     ~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (fatal: not a gi...ectories): .git:String) [], RemoteException
    + FullyQualifiedErrorId : NativeCommandError

CUDA versions of Jaxlib 0.3.x?

There don't seem to be any CUDA builds of jaxlib 0.3.x.

Looking at your GitHub Actions runs, it looks like the build failed, with some complaints about lack of disk space.
