Comments (4)
Here is what my system looks like in Adrenalin:
Vulkan Driver is 2.0.294
I have the same problem.
shark_tank local cache is located at C:\Users\andre\.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
gradio temporary image cache located at A:\A.I. Sidehustle\shark-nod-ai\shark_tmp/gradio. You may change this by setting the GRADIO_TEMP_DIR environment variable.
No temporary images files to clear.
vulkan devices are available.
metal devices are not available.
cuda devices are not available.
rocm devices are not available.
local-sync devices are available.
shark_tank local cache is located at C:\Users\andre\.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
local-task devices are available.
shark_tank local cache is located at C:\Users\andre\.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
Running on local URL: http://0.0.0.0:8080
shark_tank local cache is located at C:\Users\andre\.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
To create a public link, set `share=True` in `launch()`.
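The startup messages above name two configurable paths: the `--local_tank_cache=` flag for the shark_tank cache and the `GRADIO_TEMP_DIR` environment variable for Gradio's temporary images. A minimal sketch of setting both (the launcher script name is a placeholder, not SHARK's actual entry point; substitute whatever you normally run, and on Windows cmd use `set GRADIO_TEMP_DIR=...` instead of `export`):

```shell
# Move Gradio's temp-image directory and the shark_tank cache off the default drive.
export GRADIO_TEMP_DIR=/tmp/shark_gradio
mkdir -p /tmp/shark_gradio "$HOME/shark_tank_cache"

# "your_shark_launcher.py" is a placeholder -- use your usual SHARK entry point.
python your_shark_launcher.py --local_tank_cache="$HOME/shark_tank_cache"
```

Note that `share=True` from the line above is not a CLI flag; it is an argument to Gradio's `launch()` call in Python code.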
Found device AMD Radeon(TM) RX Vega 11 Graphics. Using target triple rdna2-unknown-windows.
Using tuned models for stabilityai/stable-diffusion-2-1-base(fp16) on device vulkan://29000000-0000-0000-0000-000000000000.
saving euler_scale_model_input_1_512_512_vulkan_fp16_torch_linalg.mlir to C:\Users\andre\AppData\Local\Temp
loading existing vmfb from: A:\A.I. Sidehustle\shark-nod-ai\euler_scale_model_input_1_512_512_vulkan_fp16.vmfb
WARNING: [Loader Message] Code 0 : ReadDataFilesInRegistry: Registry lookup failed to get layer manifest files.
Loading module A:\A.I. Sidehustle\shark-nod-ai\euler_scale_model_input_1_512_512_vulkan_fp16.vmfb...
Compiling Vulkan shaders. This may take a few minutes.
saving euler_step_1_512_512_vulkan_fp16_torch_linalg.mlir to C:\Users\andre\AppData\Local\Temp
loading existing vmfb from: A:\A.I. Sidehustle\shark-nod-ai\euler_step_1_512_512_vulkan_fp16.vmfb
Loading module A:\A.I. Sidehustle\shark-nod-ai\euler_step_1_512_512_vulkan_fp16.vmfb...
Compiling Vulkan shaders. This may take a few minutes.
use_tuned? sharkify: True
_1_64_512_512_fp16_tuned_stable-diffusion-2-1-base
Loading module A:\A.I. Sidehustle\shark-nod-ai\clip_1_64_512_512_fp16_tuned_stable-diffusion-2-1-base_vulkan.vmfb...
Compiling Vulkan shaders. This may take a few minutes.
torch\fx\node.py:263: UserWarning: Trying to prepend a node to itself. This behavior has no effect on the graph.
warnings.warn("Trying to prepend a node to itself. This behavior has no effect on the graph.")
Loading Winograd config file from C:\Users\andre\.local/shark_tank/configs\unet_winograd_vulkan.json
100%|█████████████████████████████████████████████████████████████████████████████████| 107/107 [00:00<00:00, 1.76kB/s]
Retrying with a different base model configuration
mat1 and mat2 shapes cannot be multiplied (128x768 and 1024x320)
Retrying with a different base model configuration
Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 9, 64, 64] to have 4 channels, but got 9 channels instead
Retrying with a different base model configuration
Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 9, 64, 64] to have 4 channels, but got 9 channels instead
Retrying with a different base model configuration
Given groups=1, weight of size [320, 4, 3, 3], expected input[4, 7, 512, 512] to have 4 channels, but got 7 channels instead
Retrying with a different base model configuration
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "asyncio\runners.py", line 190, in run
File "asyncio\runners.py", line 118, in run
File "asyncio\base_events.py", line 640, in run_until_complete
File "asyncio\windows_events.py", line 321, in run_forever
File "asyncio\base_events.py", line 607, in run_forever
File "asyncio\base_events.py", line 1922, in _run_once
File "asyncio\events.py", line 80, in _run
File "gradio\queueing.py", line 431, in process_events
File "gradio\queueing.py", line 388, in call_prediction
File "gradio\route_utils.py", line 219, in call_process_api
File "gradio\blocks.py", line 1437, in process_api
File "gradio\blocks.py", line 1123, in call_function
File "gradio\utils.py", line 503, in async_iteration
File "gradio\utils.py", line 496, in __anext__
File "anyio\to_thread.py", line 33, in run_sync
File "anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
File "anyio\_backends\_asyncio.py", line 807, in run
File "gradio\utils.py", line 479, in run_sync_iterator_async
File "gradio\utils.py", line 629, in gen_wrapper
File "ui\txt2img_ui.py", line 195, in txt2img_inf
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_txt2img.py", line 134, in generate_images
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 235, in produce_img_latents
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 114, in load_unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 858, in unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 853, in unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 63, in check_compilation
SystemExit: Could not compile Unet. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "uvicorn\protocols\websockets\websockets_impl.py", line 247, in run_asgi
File "uvicorn\middleware\proxy_headers.py", line 84, in __call__
File "fastapi\applications.py", line 292, in __call__
File "starlette\applications.py", line 122, in __call__
File "starlette\middleware\errors.py", line 149, in __call__
File "starlette\middleware\cors.py", line 75, in __call__
File "starlette\middleware\exceptions.py", line 68, in __call__
File "fastapi\middleware\asyncexitstack.py", line 17, in __call__
File "starlette\routing.py", line 718, in __call__
File "starlette\routing.py", line 341, in handle
File "starlette\routing.py", line 82, in app
File "fastapi\routing.py", line 324, in app
File "gradio\routes.py", line 578, in join_queue
File "asyncio\tasks.py", line 639, in sleep
asyncio.exceptions.CancelledError
ERROR: Traceback (most recent call last):
File "asyncio\runners.py", line 190, in run
File "asyncio\runners.py", line 118, in run
File "asyncio\base_events.py", line 640, in run_until_complete
File "asyncio\windows_events.py", line 321, in run_forever
File "asyncio\base_events.py", line 607, in run_forever
File "asyncio\base_events.py", line 1922, in _run_once
File "asyncio\events.py", line 80, in _run
File "gradio\queueing.py", line 431, in process_events
File "gradio\queueing.py", line 388, in call_prediction
File "gradio\route_utils.py", line 219, in call_process_api
File "gradio\blocks.py", line 1437, in process_api
File "gradio\blocks.py", line 1123, in call_function
File "gradio\utils.py", line 503, in async_iteration
File "gradio\utils.py", line 496, in __anext__
File "anyio\to_thread.py", line 33, in run_sync
File "anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
File "anyio\_backends\_asyncio.py", line 807, in run
File "gradio\utils.py", line 479, in run_sync_iterator_async
File "gradio\utils.py", line 629, in gen_wrapper
File "ui\txt2img_ui.py", line 195, in txt2img_inf
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_txt2img.py", line 134, in generate_images
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 235, in produce_img_latents
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 114, in load_unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 858, in unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 853, in unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 63, in check_compilation
**SystemExit: Could not compile Unet. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues**
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "starlette\routing.py", line 686, in lifespan
File "uvicorn\lifespan\on.py", line 137, in receive
File "asyncio\queues.py", line 158, in get
asyncio.exceptions.CancelledError
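A note on the compile failures in the log above (my reading, not anything SHARK prints): `mat1 and mat2 shapes cannot be multiplied (128x768 and 1024x320)` is the classic symptom of a text-encoder width mismatch. SD 1.x CLIP embeddings are 768-dimensional, while SD 2.x's OpenCLIP encoder produces 1024-dimensional embeddings, so a 768-wide embedding cannot be multiplied into a cross-attention weight that expects 1024. A pure-Python sketch of the dimension rule:

```python
def matmul_ok(a_shape, b_shape):
    """Matrix multiplication (a @ b) is defined only when a's inner
    dimension matches b's outer dimension."""
    return a_shape[1] == b_shape[0]

# Shapes taken verbatim from the log line above:
print(matmul_ok((128, 768), (1024, 320)))   # False: 768 != 1024, the reported error
# A 1024-wide embedding (the SD 2.x text-encoder width) would line up:
print(matmul_ok((128, 1024), (1024, 320)))  # True
```

The `expected input[2, 9, 64, 64] to have 4 channels, but got 9 channels` retries point the same way: a 9-channel UNet input is what an inpainting checkpoint takes (4 latent + 4 masked-image + 1 mask channels), so the retry loop appears to be probing base-model configurations that do not match the selected weights. The later comment confirms the root cause was an outdated SHARK build.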
Fixed: I was using a version of SHARK that was too old. Updating from nod.ai SHARK 20231115.1026 to nod.ai SHARK 20240126.1139 resolved it.
Same problem here using release nod.ai SHARK 20231115.1026 (nodai_shark_studio_20231115_1026.exe).