radames / real-time-latent-consistency-model

App showcasing multiple real-time diffusion model pipelines with Diffusers

Home Page: https://huggingface.co/spaces/radames/Real-Time-Latent-Consistency-Model

License: Apache License 2.0

Dockerfile 0.50% Python 86.90% HTML 0.12% Shell 0.19% JavaScript 0.42% CSS 0.02% Svelte 8.07% TypeScript 3.78%
diffusers latent-consistency-model machine-learning mjpeg mjpeg-stream stable-diffusion diffusion-models real-time

real-time-latent-consistency-model's Introduction

title: Real-Time Latent Consistency Model Image-to-Image ControlNet
emoji: 🖼️🖼️
colorFrom: gray
colorTo: indigo
sdk: docker
pinned: false
suggested_hardware: a10g-small
disable_embedding: true

Real-Time Latent Consistency Model

This demo showcases Latent Consistency Model (LCM) using Diffusers with an MJPEG stream server. You can read more about LCM + LoRAs with Diffusers here.
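
For context, the MJPEG part is just an HTTP response whose body is an endless multipart stream of JPEG frames. Below is a minimal sketch of such an endpoint in FastAPI; it is illustrative only, and names like frame_queue are hypothetical, not this app's actual code.

import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()
frame_queue: "asyncio.Queue[bytes]" = asyncio.Queue()  # JPEG-encoded frames

async def mjpeg_frames():
    # Each multipart chunk carries one JPEG; the browser replaces the
    # previous frame as each new part arrives, producing a live stream.
    while True:
        jpeg = await frame_queue.get()
        yield b"--frame\r\nContent-Type: image/jpeg\r\n\r\n" + jpeg + b"\r\n"

@app.get("/stream")
async def stream():
    return StreamingResponse(
        mjpeg_frames(), media_type="multipart/x-mixed-replace; boundary=frame"
    )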

You need a webcam to run this demo. 🤗

See a collection of live demos here

Running Locally

You need Python 3.10 and Node >= 19, plus CUDA, a Mac with an M1/M2/M3 chip, or an Intel Arc GPU

Install

python -m venv venv
source venv/bin/activate
pip3 install -r server/requirements.txt
cd frontend && npm install && npm run build && cd ..
python server/main.py --reload --pipeline img2imgSDTurbo 

Don't forget to build the frontend!

cd frontend && npm install && npm run build && cd ..

Pipelines

You can build your own pipeline by following the examples here. A rough skeleton is sketched below.
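
The skeleton below is inferred from how the server loads pipelines (server/main.py imports pipelines.<name> and instantiates the pipeline class with the config, device, and torch dtype, as visible in the tracebacks further down this page); the class and method names are illustrative assumptions, so check the existing files under server/pipelines for the real interface.

from PIL import Image


class Pipeline:
    def __init__(self, args, device, torch_dtype):
        # Load your diffusers pipeline here, move it to `device`,
        # and cast it to `torch_dtype`.
        ...

    def predict(self, params) -> Image.Image:
        # Run one generation with the current parameters and
        # return the resulting frame.
        ...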

LCM

Image to Image

python server/main.py --reload --pipeline img2img 

LCM

Text to Image

python server/main.py --reload --pipeline txt2img 

Image to Image ControlNet Canny

python server/main.py --reload --pipeline controlnet 

LCM + LoRA

Using LCM-LoRA gives the pipeline the superpower of running inference in as little as 4 steps. Learn more here or read the technical report.

Image to Image ControlNet Canny LoRA

python server/main.py --reload --pipeline controlnetLoraSD15

Or SDXL; note that SDXL is slower than SD15, since inference runs on 1024x1024 images.

python server/main.py --reload --pipeline controlnetLoraSDXL

Text to Image

python server/main.py --reload --pipeline txt2imgLora
python server/main.py --reload --pipeline txt2imgLoraSDXL

Available Pipelines

img2img
txt2img
controlnet
txt2imgLora
controlnetLoraSD15

controlnetLoraSDXL
txt2imgLoraSDXL

img2imgSDXLTurbo
controlnetSDXLTurbo

img2imgSDTurbo
controlnetSDTurbo

controlnetSegmindVegaRT
img2imgSegmindVegaRT

Setting command-line options and environment variables

  • --host: Host address (default: 0.0.0.0)
  • --port: Port number (default: 7860)
  • --reload: Reload code on change
  • --max-queue-size: Maximum queue size (optional)
  • --timeout: Timeout period (optional)
  • --safety-checker: Enable Safety Checker (optional)
  • --torch-compile: Use Torch Compile
  • --use-taesd / --no-taesd: Use Tiny Autoencoder
  • --pipeline: Pipeline to use (default: "txt2img")
  • --ssl-certfile: SSL Certificate File (optional)
  • --ssl-keyfile: SSL Key File (optional)
  • --debug: Print Inference time
  • --compel: Enable Compel prompt weighting
  • --sfast: Enable Stable Fast
  • --onediff: Enable OneDiff

If you run via bash build-run.sh, you can set the PIPELINE variable to choose which pipeline to run:

PIPELINE=txt2imgLoraSDXL bash build-run.sh

You can also configure it with environment variables:

TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 python server/main.py --reload --pipeline txt2imgLoraSDXL

If you're running locally and want to test it on Mobile Safari, the web server needs to be served over HTTPS (or follow the instructions in this comment):

openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
python server/main.py --reload --ssl-certfile=certificate.pem --ssl-keyfile=key.pem

Docker

You need the NVIDIA Container Toolkit for Docker. The image defaults to the controlnet pipeline.

docker build -t lcm-live .
docker run -ti -p 7860:7860 --gpus all lcm-live

To reuse model data from the host and avoid downloading it again, mount a cache volume. You can change ~/.cache/huggingface to any other directory, but if you use huggingface-cli locally, you can share the same cache:

docker run -ti -p 7860:7860 -e HF_HOME=/data -v ~/.cache/huggingface:/data  --gpus all lcm-live

or with environment variables

docker run -ti -e PIPELINE=txt2imgLoraSDXL -p 7860:7860 --gpus all lcm-live

Demo on Hugging Face

(video: lcm-real.mp4)

real-time-latent-consistency-model's People

Contributors

cocktailpeanut, nuullll, radames, strint

real-time-latent-consistency-model's Issues

controlnet lora is not working!

I have a problem: only img2img works; all other options are not working. Here is the log while running controlnet lora:

C:\Users\Genesis\Desktop\ai>conda.bat activate
INFO: Will watch for changes in these directories: ['C:\Users\Genesis\github\Real-Time-Latent-Consistency-Model']
INFO: Uvicorn running on http://127.0.0.1:7860 (Press CTRL+C to quit)
INFO: Started reloader process [15520] using StatReload
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.1.0+cu118)
Python 3.10.11 (you have 3.10.9)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
TIMEOUT: 0.0
SAFETY_CHECKER: None
MAX_QUEUE_SIZE: 0
device: cuda
unet\diffusion_pytorch_model.safetensors not found
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:11<00:00, 1.95s/it]
You have disabled the safety checker for <class 'diffusers.pipelines.controlnet.pipeline_controlnet_img2img.StableDiffusionControlNetImg2ImgPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:01<00:00, 5.41it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.controlnet.pipeline_controlnet_img2img.StableDiffusionControlNetImg2ImgPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
unet\diffusion_pytorch_model.safetensors not found
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:16<00:00, 2.74s/it]
You have disabled the safety checker for <class 'diffusers.pipelines.controlnet.pipeline_controlnet_img2img.StableDiffusionControlNetImg2ImgPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

binary_path: C:\Users\Genesis\anaconda3\envs\lcm\lib\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll
CUDA SETUP: Loading binary C:\Users\Genesis\anaconda3\envs\lcm\lib\site-packages\bitsandbytes\cuda_setup\libbitsandbytes_cuda116.dll...
INFO: Started server process [18472]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:64657 - "GET /queue_size HTTP/1.1" 200 OK
INFO: 127.0.0.1:64657 - "GET /?__theme=dark HTTP/1.1" 200 OK
INFO: 127.0.0.1:64661 - "GET /queue_size HTTP/1.1" 200 OK
INFO: ('127.0.0.1', 64662) - "WebSocket /ws" [accepted]
New user connected: 1984d996-c502-49d8-b410-fe535fbffa2a
INFO: connection open
INFO: 127.0.0.1:64661 - "GET /stream/1984d996-c502-49d8-b410-fe535fbffa2a HTTP/1.1" 200 OK
INFO: 127.0.0.1:64663 - "GET /queue_size HTTP/1.1" 200 OK
INFO: 127.0.0.1:64663 - "GET /queue_size HTTP/1.1" 200 OK
INFO: 127.0.0.1:64665 - "GET /queue_size HTTP/1.1" 200 OK
ERROR:root:Error: 1005
Traceback (most recent call last):
File "C:\Users\Genesis\github\Real-Time-Latent-Consistency-Model\app-controlnetlora.py", line 286, in handle_websocket_data
data = await websocket.receive_bytes()
File "C:\Users\Genesis\AppData\Roaming\Python\Python310\site-packages\starlette\websockets.py", line 122, in receive_bytes
self._raise_on_disconnect(message)
File "C:\Users\Genesis\AppData\Roaming\Python\Python310\site-packages\starlette\websockets.py", line 105, in _raise_on_disconnect
raise WebSocketDisconnect(message["code"])
starlette.websockets.WebSocketDisconnect: 1005
User disconnected: 1984d996-c502-49d8-b410-fe535fbffa2a
INFO: connection closed
INFO: 127.0.0.1:64703 - "GET /?__theme=dark HTTP/1.1" 200 OK
INFO: 127.0.0.1:64703 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:64707 - "GET /queue_size HTTP/1.1" 200 OK
INFO: ('127.0.0.1', 64710) - "WebSocket /ws" [accepted]
New user connected: b6e74ba1-dd78-42b8-95cc-82a97b2ee09e
INFO: connection open
INFO: 127.0.0.1:64707 - "GET /stream/b6e74ba1-dd78-42b8-95cc-82a97b2ee09e HTTP/1.1" 200 OK
INFO: 127.0.0.1:64711 - "GET /queue_size HTTP/1.1" 200 OK
INFO: 127.0.0.1:64711 - "GET /queue_size HTTP/1.1" 200 OK

Here is the bat file:

@echo on
call "C:\Users\Genesis\anaconda3\condabin\activate.bat"
cd ""C:\Users\Genesis\github\Real-Time-Latent-Consistency-Model"
call activate lcm

uvicorn app-controlnetlora:app --host 127.0.0.1 --port 7860 --reload
::uvicorn app-img2img:app --host 127.0.0.1 --port 7860 --reload
::uvicorn app-txt2img:app --host 127.0.0.1 --port 7860 --reload
::TIMEOUT=120 SAFETY_CHECKER=TRUE MAX_QUEUE_SIZE=4 uvicorn app-img2img:app --host 127.0.0.1 --port 7860 --reload
pause

ValueError: `num_inference_steps`: 4 cannot be larger than `original_inference_steps`

Catching this error when playing with the inference steps slider.

Trace

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/fastapi/applications.py", line 1115, in __call__
    await super().__call__(scope, receive, send)
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/starlette/middleware/cors.py", line 83, in __call__
    await self.app(scope, receive, send)
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/starlette/routing.py", line 69, in app
    await response(scope, receive, send)
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/starlette/responses.py", line 270, in __call__
    async with anyio.create_task_group() as task_group:
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__
    raise exceptions[0]
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/starlette/responses.py", line 273, in wrap
    await func()
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/starlette/responses.py", line 262, in stream_response
    async for chunk in self.body_iterator:
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/app-txt2img.py", line 196, in generate
    image = predict(params)
            ^^^^^^^^^^^^^^^
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/app-txt2img.py", line 110, in predict
    results = pipe(
              ^^^^^
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py", line 670, in __call__
    self.scheduler.set_timesteps(num_inference_steps, device, original_inference_steps=original_inference_steps)
  File "/Users/korymath/Documents/code/Real-Time-Latent-Consistency-Model/venv/lib/python3.11/site-packages/diffusers/schedulers/scheduling_lcm.py", line 364, in set_timesteps
    raise ValueError(
ValueError: `num_inference_steps`: 4 cannot be larger than `original_inference_steps`: 2 because the final timestep schedule will be a subset of the `original_inference_steps`-sized initial timestep schedule.
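
In other words, the scheduler builds the final timestep schedule by subsampling the original_inference_steps-sized schedule, so the requested step count can never exceed it. A minimal guard before calling the pipeline could look like this (variable names are hypothetical):

# Clamp the slider value so it never exceeds the original schedule size.
num_inference_steps = min(num_inference_steps, original_inference_steps)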

error related to "public" directory

Last week I was able to run this code at commit ee4d659 on a 3080 machine I have access to, but today I tried on a 4090 machine at commit c640c48 and I got the following errors after following the same steps. I wonder if it's related to a recent change in the code base, or if it's a different setup between the two machines?

Edit: I was able to confirm that the commit ee4d659 works on the new system, as do the next two commits. I will try to hunt down the exact issue.

This is after running:

docker build -t lcm-live .
docker run -ti -p 7860:7860 --gpus all lcm-live
> Using @sveltejs/adapter-static
error during build:
Error: EACCES: permission denied, mkdir '../public/_app/immutable/assets'
    at Object.mkdirSync (node:fs:1379:3)
    at mkdirp (file:///home/user/app/frontend/node_modules/@sveltejs/kit/src/utils/filesystem.js:7:6)
    at go (file:///home/user/app/frontend/node_modules/@sveltejs/kit/src/utils/filesystem.js:58:4)
    at file:///home/user/app/frontend/node_modules/@sveltejs/kit/src/utils/filesystem.js:55:5
    at Array.forEach (<anonymous>)
    at go (file:///home/user/app/frontend/node_modules/@sveltejs/kit/src/utils/filesystem.js:54:25)
    at file:///home/user/app/frontend/node_modules/@sveltejs/kit/src/utils/filesystem.js:55:5
    at Array.forEach (<anonymous>)
    at go (file:///home/user/app/frontend/node_modules/@sveltejs/kit/src/utils/filesystem.js:54:25)
    at file:///home/user/app/frontend/node_modules/@sveltejs/kit/src/utils/filesystem.js:55:5

frontend build failed
 exit 1

pipeline: controlnet 
DEVICE: cuda
TORCH_DTYPE: torch.float16
PIPELINE: controlnet
SAFETY_CHECKER: False
TORCH_COMPILE: False
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 996/996 [00:00<00:00, 10.3MB/s]
diffusion_pytorch_model.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.45G/1.45G [00:39<00:00, 36.9MB/s]
model_index.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 588/588 [00:00<00:00, 5.94MB/s]
tokenizer/special_tokens_map.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 133/133 [00:00<00:00, 2.47MB/s]
scheduler/scheduler_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 539/539 [00:00<00:00, 11.2MB/s]
text_encoder/config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 610/610 [00:00<00:00, 12.9MB/s]
tokenizer/tokenizer_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 765/765 [00:00<00:00, 16.0MB/s]
(…)ature_extractor/preprocessor_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 518/518 [00:00<00:00, 11.6MB/s]
vae/config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 651/651 [00:00<00:00, 3.66MB/s]
unet/config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.73k/1.73k [00:00<00:00, 34.6MB/s]
tokenizer/merges.txt: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 824kB/s]
tokenizer/vocab.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.06M/1.06M [00:00<00:00, 1.56MB/s]
model.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 492M/492M [00:21<00:00, 23.1MB/s]
diffusion_pytorch_model.safetensors: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 335M/335M [00:27<00:00, 12.0MB/s]
diffusion_pytorch_model.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.44G/3.44G [01:52<00:00, 30.6MB/s]
Fetching 13 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 13/13 [01:53<00:00,  8.73s/it]
Loading pipeline components...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:00<00:00, 32.79it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.controlnet.pipeline_controlnet_img2img.StableDiffusionControlNetImg2ImgPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
Traceback (most recent call last):
  File "/home/user/app/run.py", line 5, in <module>
    uvicorn.run(
  File "/home/user/.local/lib/python3.10/site-packages/uvicorn/main.py", line 587, in run
    server.run()
  File "/home/user/.local/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/home/user/.local/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/home/user/.local/lib/python3.10/site-packages/uvicorn/config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "/home/user/.local/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/user/app/app.py", line 21, in <module>
    init_app(app, user_data, args, pipeline)
  File "/home/user/app/app_init.py", line 158, in init_app
    os.makedirs("public")
  File "/usr/lib/python3.10/os.py", line 225, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: 'public'

Diffusers requirements needed a bump.

On my side, diffusers threw an error:

(venv) (base) ws2testing@DESKTOP-EU7ASQA:~/ws2testing/Real-Time-Latent-Consistency-Model$ python server/main.py --reload --pipeline controlnetHyperSD

host: 0.0.0.0
port: 7860
reload: True
max_queue_size: 0
timeout: 0.0
safety_checker: False
torch_compile: False
taesd: True
pipeline: controlnetHyperSD
ssl_certfile: None
ssl_keyfile: None
sfast: False
onediff: False
compel: False
debug: False

Device: cuda
torch_dtype: torch.float16
/home/wsl2testing/wsl2testing/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/transformers/utils/generic.py:441: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/home/wsl2testing/wsl2testing/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/transformers/utils/generic.py:309: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/home/wsl2testing/wsl2testing/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/transformers/utils/generic.py:309: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
/home/wsl2testing/wsl2testing/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/diffusers/utils/outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
Traceback (most recent call last):
File "/home/wsl2testing/wsl2testing/Real-Time-Latent-Consistency-Model/server/main.py", line 165, in
pipeline_class = get_pipeline_class(config.pipeline)
File "/home/wsl2testing/wsl2testing/Real-Time-Latent-Consistency-Model/server/util.py", line 9, in get_pipeline_class
module = import_module(f"pipelines.{pipeline_name}")
File "/usr/lib/python3.10/importlib/init.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 1050, in _gcd_import
File "", line 1027, in _find_and_load
File "", line 1006, in _find_and_load_unlocked
File "", line 688, in _load_unlocked
File "", line 883, in exec_module
File "", line 241, in _call_with_frames_removed
File "/home/wsl2testing/wsl2testing/Real-Time-Latent-Consistency-Model/server/pipelines/controlnetHyperSD.py", line 1, in
from diffusers import (
ImportError: cannot import name 'TCDScheduler' from 'diffusers' (/home/wsl2testing/wsl2testing/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/diffusers/init.py)
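
TCDScheduler ships only with newer diffusers releases, so this ImportError usually means the installed diffusers is too old for the controlnetHyperSD pipeline; the issue title ("Diffusers requirements needed a bump") says as much. A quick check, on the assumption that upgrading via pip install -U diffusers resolves it:

import diffusers

print(diffusers.__version__)
from diffusers import TCDScheduler  # raises ImportError on older releases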

Device: CPU?

PS F:\AI\LCM-Realtime\Real-Time-Latent-Consistency-Model> uvicorn "app-controlnet:app" --host 0.0.0.0 --port 7860 --reload
INFO: Will watch for changes in these directories: ['F:\AI\LCM-Realtime\Real-Time-Latent-Consistency-Model']
INFO: Uvicorn running on http://0.0.0.0:7860 (Press CTRL+C to quit)
INFO: Started reloader process [47332] using StatReload
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.1.0+cpu)
Python 3.11.6 (you have 3.11.0)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
TIMEOUT: 0.0
SAFETY_CHECKER: None
MAX_QUEUE_SIZE: 0
device: cpu
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 5/5 [00:00<00:00, 24.82it/s]
Pipelines loaded with dtype=torch.float16 cannot run with cpu device. It is not recommended to move them to cpu as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for float16 operations on this device in PyTorch. Please, remove the torch_dtype=torch.float16 argument, or use another device for inference.
(the same warning is repeated eight times in the log)
INFO: Started server process [48392]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:65255 - "GET /queue_size HTTP/1.1" 200 OK
INFO: 127.0.0.1:65261 - "GET /queue_size HTTP/1.1" 200 OK

I get the above error when trying to launch the controlnet app
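
The xFormers warning is the giveaway: it reports "you have 2.1.0+cpu", i.e. a CPU-only PyTorch build, so the app falls back to device: cpu and the float16 pipeline cannot run. Reinstalling a CUDA build of PyTorch should fix it, for example (assuming CUDA 12.1 to match the build xFormers expects):

pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/cu121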

AttributeError: module diffusers has no attribute LCMScheduler

…s: 100%|████████████████████████████████████████| 3.44G/3.44G [02:13<00:00, 32.7MB/s]
Traceback (most recent call last):
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\uvicorn\_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\uvicorn\server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\asyncio\base_events.py", line 649, in run_until_complete
    return future.result()
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\uvicorn\server.py", line 68, in serve
    config.load()
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\uvicorn\config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\uvicorn\importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\LCMRT\Real-Time-Latent-Consistency-Model\app-img2img.py", line 62, in <module>
    pipe = DiffusionPipeline.from_pretrained(
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1105, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 391, in load_sub_model
    class_obj, class_candidates = get_class_obj_and_candidates(
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 319, in get_class_obj_and_candidates
    class_obj = getattr(library, class_name)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\diffusers\utils\import_utils.py", line 677, in __getattr__
    raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module diffusers has no attribute LCMScheduler

run in Colab

Could you add support to run it in Colab? I see that it uses Gradio; maybe use something like share=True.

torch.compile not working under Windows?

I'm trying to use the torch.compile option to improve my performance, but the system gives me this error:

device: cuda
Loading pipeline components...: 100%|##########| 5/5 [00:00<00:00, 11.23it/s]
Process SpawnProcess-1:
Traceback (most recent call last):
  File "C:\Users\indrema\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Users\indrema\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "C:\Users\indrema\AppData\Local\Programs\Python\Python310\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Users\indrema\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 646, in run_until_complete
    return future.result()
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\server.py", line 68, in serve
    config.load()
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "C:\Users\indrema\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "L:\Real-Time-Latent-Consistency-Model\app-controlnet.py", line 110, in <module>
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\torch\__init__.py", line 1723, in compile
    return torch._dynamo.optimize(backend=backend, nopython=fullgraph, dynamic=dynamic, disable=disable)(model)
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 583, in optimize
    check_if_dynamo_supported()
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 535, in check_if_dynamo_supported
    raise RuntimeError("Windows not yet supported for torch.compile")
RuntimeError: Windows not yet supported for torch.compile

I'm on Windows 10 with an RTX 3090. Thanks for the support.
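
Since PyTorch raises this unconditionally on Windows, one workaround is to skip compilation on that platform rather than enable --torch-compile. A minimal sketch (a hypothetical guard, not the repo's actual code):

import platform

import torch


def maybe_compile(pipe):
    # torch.compile is unsupported on Windows in this PyTorch version,
    # so compile the UNet only elsewhere and fall back to eager mode here.
    if platform.system() != "Windows":
        pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
    return pipe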

macOS Issue: OBS does not show up under Camera selection

Not sure if this is a recent macOS and OBS issue, but no matter what, I could not see the Virtual Camera from OBS. I thought it would be straightforward. Maybe my setting is wrong?

(Screenshot: 2023-11-15, 1:26 PM)

Maybe someone can give suggestion?

UPDATE:
Weird thing, however: it seems to work on Chrome, but "Camera Selection" is not really right. I can't see the option for my iPhone or for OBS; it simply doesn't show up.

TypeError: slice indices must be integers or None or have an index method

When trying to use my webcam after the latest updates to diffusers, I'm getting the following error:

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 408, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\fastapi\applications.py", line 1115, in __call__
    await super().__call__(scope, receive, send)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\starlette\middleware\cors.py", line 83, in __call__
    await self.app(scope, receive, send)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__    raise exc
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__    await self.app(scope, receive, sender)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
    raise e
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\starlette\routing.py", line 69, in app
    await response(scope, receive, send)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\starlette\responses.py", line 270, in __call__
    async with anyio.create_task_group() as task_group:
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\anyio\_backends\_asyncio.py", line 597, in __aexit__
    raise exceptions[0]
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\starlette\responses.py", line 273, in wrap
    await func()
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\starlette\responses.py", line 262, in stream_response
    async for chunk in self.body_iterator:
  File "C:\LCMRT\Real-Time-Latent-Consistency-Model\app-img2img.py", line 199, in generate
    image = predict(
  File "C:\LCMRT\Real-Time-Latent-Consistency-Model\app-img2img.py", line 105, in predict
    results = pipe(
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Administrator\.cache\huggingface\modules\diffusers_modules\local\latent_consistency_img2img.py", line 368, in __call__
    self.scheduler.set_timesteps(strength, num_inference_steps, lcm_origin_steps)
  File "C:\ProgramData\miniconda3\envs\LCMRT\lib\site-packages\diffusers\schedulers\scheduling_lcm.py", line 377, in set_timesteps
    timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps]
TypeError: slice indices must be integers or None or have an __index__ method
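
The trace shows the old cached community img2img pipeline passing strength (a float) as the first argument to set_timesteps, so after the diffusers update the float ends up used as a slice index. For comparison (an observation from the txt2img trace earlier on this page, not a tested fix):

# Older community pipeline (from the trace above):
#   self.scheduler.set_timesteps(strength, num_inference_steps, lcm_origin_steps)
# Newer diffusers LCMScheduler call, with the integer step count first:
#   self.scheduler.set_timesteps(num_inference_steps, device, original_inference_steps=original_inference_steps)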

Can you explain how to install?

I'm sorry, but I couldn't install it locally to test. I have Stable Diffusion (A1111 and ComfyUI), but I couldn't install this. I looked on YouTube for tutorials on this repository but didn't find anything. Do you have anything?

where to upload the image

(Screenshot from 2023-12-06 10:50:13) I have deployed an img2img application; I have clicked everywhere, but I could not upload the source image. txt2img runs successfully.

Webcam Lag, script activate and xformers/cuda/torch, url

Fantastic work, it really works well! A couple of issues and workarounds:

On Windows, instead of source venv/bin/activate, I used:

venv\scripts\activate

Then I used:

pip install torch==2.1.0+cu121 -f https://download.pytorch.org/whl/torch_stable.html

to get xformers working properly. I still get a "no Triton" error, which I ignored.

Then I was noticing huge webcam lag, so I modified each of the files, changing the thread sleep time from 1/120 (about 8 ms) to 333 ms. I was only getting 3 fps or so anyway:

await asyncio.sleep(40.0 / 120.0)

in app-controlnetlora.py or app-img2img.py... so change whichever you are using. Without this, it seemed to be queuing up many, many frames, creating huge lag.

I couldn't get 0.0.0.0 to work, so I launch using this (perhaps it should be the default), so the browser works on http://127.0.0.1:7860:

uvicorn "app-controlnetlora:app" --host 127.0.0.1 --port 7860 --reload

Encountering Webcam Detection Issue on Ubuntu 22.04

Hello,
I've completed the installation process on Ubuntu 22.04. However, when I attempt to run the application, no webcam appears in the selectable device list, indicating that the webcam is not initializing or being recognized.


Error loading ASGI app

I was tinkering with it, and I think I was doing something with WebUI Automatic 1111 as well.

Now, when running the Real-Time LCM repo, I keep getting "Error loading ASGI app". What could have happened?

Issue installing on macOS M2

I kept getting this error when installing requirements:

pip install -r requirements.txt
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting diffusers==0.23.0 (from -r requirements.txt (line 1))
  Using cached diffusers-0.23.0-py3-none-any.whl.metadata (17 kB)
Collecting transformers==4.34.1 (from -r requirements.txt (line 2))
  Using cached transformers-4.34.1-py3-none-any.whl.metadata (121 kB)
Collecting gradio==3.50.2 (from -r requirements.txt (line 3))
  Using cached gradio-3.50.2-py3-none-any.whl.metadata (17 kB)
Requirement already satisfied: torch==2.1.0 in ./venv/lib/python3.10/site-packages (from -r requirements.txt (line 5)) (2.1.0)
Collecting fastapi==0.104.0 (from -r requirements.txt (line 6))
  Using cached fastapi-0.104.0-py3-none-any.whl.metadata (24 kB)
Collecting uvicorn==0.23.2 (from -r requirements.txt (line 7))
  Using cached uvicorn-0.23.2-py3-none-any.whl.metadata (6.2 kB)
Collecting Pillow==10.1.0 (from -r requirements.txt (line 8))
  Using cached Pillow-10.1.0-cp310-cp310-macosx_11_0_arm64.whl.metadata (9.5 kB)
Collecting accelerate==0.24.0 (from -r requirements.txt (line 9))
  Using cached accelerate-0.24.0-py3-none-any.whl.metadata (18 kB)
Collecting compel==2.0.2 (from -r requirements.txt (line 10))
  Using cached compel-2.0.2-py3-none-any.whl.metadata (12 kB)
Collecting controlnet-aux==0.0.7 (from -r requirements.txt (line 11))
  Using cached controlnet_aux-0.0.7.tar.gz (202 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting peft==0.6.0 (from -r requirements.txt (line 12))
  Using cached peft-0.6.0-py3-none-any.whl.metadata (23 kB)
Collecting xformers (from -r requirements.txt (line 13))
  Using cached xformers-0.0.22.post7.tar.gz (3.8 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [17 lines of output]
      Traceback (most recent call last):
        File "/Users/jimmygunawan/Documents/LCMREALTIME/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/Users/jimmygunawan/Documents/LCMREALTIME/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "/Users/jimmygunawan/Documents/LCMREALTIME/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
        File "/private/var/folders/dd/6tfdfc6x5pz37mrm2msqyc9r0000gn/T/pip-build-env-7xkq2oba/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 355, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
        File "/private/var/folders/dd/6tfdfc6x5pz37mrm2msqyc9r0000gn/T/pip-build-env-7xkq2oba/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 325, in _get_build_requires
          self.run_setup()
        File "/private/var/folders/dd/6tfdfc6x5pz37mrm2msqyc9r0000gn/T/pip-build-env-7xkq2oba/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 507, in run_setup
          super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
        File "/private/var/folders/dd/6tfdfc6x5pz37mrm2msqyc9r0000gn/T/pip-build-env-7xkq2oba/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 341, in run_setup
          exec(code, locals())
        File "<string>", line 23, in <module>
      ModuleNotFoundError: No module named 'torch'
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
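
The root failure is xformers building from source: its setup.py imports torch inside the isolated build environment before torch is available there (ModuleNotFoundError: No module named 'torch'). A common workaround (an assumption, not official project guidance) is to install torch first, or to remove the xformers line from requirements.txt on Apple Silicon, where its CUDA kernels aren't used anyway:

pip install torch==2.1.0
pip install -r requirements.txt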

pix2pixTurbo: "AttributeError: 'AutoencoderKL' object has no attribute 'add_adapter'"

Unable to run python server/main.py --reload --pipeline pix2pixTurbo

(venv) justin@DynamicEVO:~/AI/Real-Time-Latent-Consistency-Model$ python server/main.py --reload --pipeline pix2pixTurbo


host: 0.0.0.0
port: 7860
reload: True
max_queue_size: 0
timeout: 0.0
safety_checker: False
torch_compile: False
taesd: True
pipeline: pix2pixTurbo
ssl_certfile: None
ssl_keyfile: None
sfast: False
onediff: False
compel: False
debug: False


Device: cuda
torch_dtype: torch.float16
/home/justin/AI/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/transformers/utils/generic.py:441: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  _torch_pytree._register_pytree_node(
/home/justin/AI/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/transformers/utils/generic.py:309: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  _torch_pytree._register_pytree_node(
/home/justin/AI/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/transformers/utils/generic.py:309: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  _torch_pytree._register_pytree_node(
/home/justin/AI/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/diffusers/utils/outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
/home/justin/AI/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/diffusers/utils/outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
Traceback (most recent call last):
  File "/home/justin/AI/Real-Time-Latent-Consistency-Model/server/main.py", line 166, in <module>
    pipeline = pipeline_class(config, device, torch_dtype)
  File "/home/justin/AI/Real-Time-Latent-Consistency-Model/server/pipelines/pix2pixTurbo.py", line 98, in __init__
    self.model = Pix2Pix_Turbo("edge_to_image")
  File "/home/justin/AI/Real-Time-Latent-Consistency-Model/server/pipelines/pix2pix/pix2pix_turbo.py", line 134, in __init__
    vae.add_adapter(vae_lora_config, adapter_name="vae_skip")
  File "/home/justin/AI/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/diffusers/models/modeling_utils.py", line 220, in __getattr__
    return super().__getattr__(name)
  File "/home/justin/AI/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1688, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'AutoencoderKL' object has no attribute 'add_adapter'
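
add_adapter on diffusers models comes from the PEFT integration in newer releases, so this likely indicates a diffusers/peft combination too old for the pix2pixTurbo pipeline, echoing the "requirements needed a bump" issue above. A quick sanity check (hedged; the exact version thresholds are not verified here):

import diffusers
import peft

print(diffusers.__version__, peft.__version__)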

Docker volumes

What is the path to the model folders?
Is it possible to map volumes so it won't download the models every time?
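
The Docker section above covers this: point HF_HOME at a mounted volume so the Hugging Face cache persists across runs, e.g.:

docker run -ti -p 7860:7860 -e HF_HOME=/data -v ~/.cache/huggingface:/data --gpus all lcm-live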

Missing LICENSE

I see you have no LICENSE file for this project. The default is copyright.

I would suggest releasing the code under the Apache-2.0 license to match one of the license headers found in this project and so that others are encouraged to contribute changes back to your project.

"ReferenceError: Blob is not defined" during setting up frontend

Thanks for this wonderful work! I tried to run the demo (in a WSL2 environment); however, I ran into this problem when trying to set up the frontend:

npm install && npm run build && cd ..

up to date, audited 266 packages in 3s

59 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

> [email protected] build
> vite build


vite v4.5.0 building SSR bundle for production...
✓ 94 modules transformed.

vite v4.5.0 building for production...
✓ 87 modules transformed.
.svelte-kit/output/client/_app/version.json                              0.03 kB │ gzip:  0.05 kB
.svelte-kit/output/client/.vite/manifest.json                            2.77 kB │ gzip:  0.48 kB
.svelte-kit/output/client/_app/immutable/assets/2.4b49e46c.css           0.88 kB │ gzip:  0.27 kB
.svelte-kit/output/client/_app/immutable/assets/0.82666576.css           9.71 kB │ gzip:  2.75 kB
.svelte-kit/output/client/_app/immutable/nodes/0.5ed3b7c1.js             0.60 kB │ gzip:  0.38 kB
.svelte-kit/output/client/_app/immutable/chunks/index.57cd3851.js        0.92 kB │ gzip:  0.57 kB
.svelte-kit/output/client/_app/immutable/nodes/1.0b2d00ed.js             1.03 kB │ gzip:  0.59 kB
.svelte-kit/output/client/_app/immutable/chunks/singletons.49bed12e.js   2.45 kB │ gzip:  1.26 kB
.svelte-kit/output/client/_app/immutable/chunks/scheduler.d303939e.js    2.49 kB │ gzip:  1.16 kB
.svelte-kit/output/client/_app/immutable/entry/app.2f014e65.js           5.94 kB │ gzip:  2.34 kB
.svelte-kit/output/client/_app/immutable/chunks/index.b58b6c9b.js        6.16 kB │ gzip:  2.60 kB
.svelte-kit/output/client/_app/immutable/entry/start.35296672.js        24.87 kB │ gzip:  9.81 kB
.svelte-kit/output/client/_app/immutable/nodes/2.45ba902c.js            72.26 kB │ gzip: 21.33 kB
✓ built in 2.90s
ReferenceError: Blob is not defined
    at file:///mnt/c/Users/vince/Documents/foco/code/Real-Time-Latent-Consistency-Model/frontend/.svelte-kit/output/server/entries/pages/_page.svelte.js:22:49
    at ModuleJob.run (node:internal/modules/esm/module_job:193:25)
    at async Promise.all (index 0)
    at async ESMLoader.import (node:internal/modules/esm/loader:530:24)
    at async Module.component (file:///mnt/c/Users/vince/Documents/foco/code/Real-Time-Latent-Consistency-Model/frontend/.svelte-kit/output/server/nodes/2.js:5:59)
    at async Promise.all (index 1)
    at async render_response (file:///mnt/c/Users/vince/Documents/foco/code/Real-Time-Latent-Consistency-Model/frontend/.svelte-kit/output/server/index.js:1322:21)
    at async render_page (file:///mnt/c/Users/vince/Documents/foco/code/Real-Time-Latent-Consistency-Model/frontend/.svelte-kit/output/server/index.js:2162:12)
    at async resolve (file:///mnt/c/Users/vince/Documents/foco/code/Real-Time-Latent-Consistency-Model/frontend/.svelte-kit/output/server/index.js:2743:24)
    at async respond (file:///mnt/c/Users/vince/Documents/foco/code/Real-Time-Latent-Consistency-Model/frontend/.svelte-kit/output/server/index.js:2629:22)

node:internal/event_target:1011
  process.nextTick(() => { throw err; });
                           ^
Error: 500 /
To suppress or handle this error, implement `handleHttpError` in https://kit.svelte.dev/docs/configuration#prerender
    at file:///mnt/c/Users/vince/Documents/foco/code/Real-Time-Latent-Consistency-Model/frontend/node_modules/@sveltejs/kit/src/core/config/options.js:212:13
    at file:///mnt/c/Users/vince/Documents/foco/code/Real-Time-Latent-Consistency-Model/frontend/node_modules/@sveltejs/kit/src/core/postbuild/prerender.js:64:25
    at save (file:///mnt/c/Users/vince/Documents/foco/code/Real-Time-Latent-Consistency-Model/frontend/node_modules/@sveltejs/kit/src/core/postbuild/prerender.js:403:4)
    at visit (file:///mnt/c/Users/vince/Documents/foco/code/Real-Time-Latent-Consistency-Model/frontend/node_modules/@sveltejs/kit/src/core/postbuild/prerender.js:236:3)
Emitted 'error' event on Worker instance at:
    at Worker.[kOnErrorMessage] (node:internal/worker:298:10)
    at Worker.[kOnMessage] (node:internal/worker:309:37)
    at MessagePort.<anonymous> (node:internal/worker:205:57)
    at MessagePort.[nodejs.internal.kHybridDispatch] (node:internal/event_target:736:20)
    at MessagePort.exports.emitMessage (node:internal/per_context/messageport:23:28)

May I ask if you have any clue what the problem is? Thanks for your help in advance!
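
One thing to check (an assumption, not a confirmed diagnosis): Blob only exists as a global in recent Node versions, and the README asks for Node >= 19, so this error typically means the frontend was built with an older Node. A quick check:

node -e "console.log(process.version, typeof Blob)"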

I can't run it locally

I have this error:
ctx.load_cert_chain(certfile, keyfile, get_password)
FileNotFoundError: [Errno 2] No such file or directory

Any idea what I can do to solve it?
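
That traceback is uvicorn failing to load the SSL certificate and key, so the paths given via --ssl-certfile / --ssl-keyfile likely don't exist. Either drop those flags or generate the files as shown in the README:

openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem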

404 Not Found

Hello! I deployed the project locally successfully last week. But when I deployed the new version today, the web page cannot be loaded and always reports error 404: Not Found. I want to know how to solve this problem.
[screenshots attached]
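One thing worth checking when a 404 appears right after pulling a new version: the server now serves a compiled frontend, so the frontend has to be rebuilt (`npm install && npm run build` inside `frontend/`) before starting `server/main.py`. A stale or missing build is the most common cause of this symptom.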

Hugging Face webcam option locally

Hello! I've noticed that the Hugging Face demo asks for webcam permission and gives me a dropdown to select my webcam; however, I don't seem to have this option with the cloned repo. Is there a way to add this feature locally?
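The camera dropdown depends on `navigator.mediaDevices`, which browsers only expose in a secure context, i.e. over `https://` or on `http://localhost`. If the local server is opened via a plain-HTTP LAN address, the option disappears. Serving with `--ssl-certfile`/`--ssl-keyfile` (see the self-signed certificate note above) or browsing via `localhost` should bring it back.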

Latest build gives error on uvicorn launch

Installing on Windows 10 using env produces this error:

Process SpawnProcess-1:
Traceback (most recent call last):
  File "C:\Users\admin\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Users\admin\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "M:\Accessories\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "M:\Accessories\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "C:\Users\admin\AppData\Local\Programs\Python\Python310\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Users\admin\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 646, in run_until_complete
    return future.result()
  File "M:\Accessories\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\server.py", line 68, in serve
    config.load()
  File "M:\Accessories\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "M:\Accessories\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "C:\Users\admin\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "M:\Accessories\Real-Time-Latent-Consistency-Model\app-img2img.py", line 81, in <module>
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
  File "M:\Accessories\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\torch\__init__.py", line 1723, in compile
    return torch._dynamo.optimize(backend=backend, nopython=fullgraph, dynamic=dynamic, disable=disable)(model)
  File "M:\Accessories\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 583, in optimize
    check_if_dynamo_supported()
  File "M:\Accessories\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 535, in check_if_dynamo_supported
    raise RuntimeError("Windows not yet supported for torch.compile")
RuntimeError: Windows not yet supported for torch.compile

Python version: 3.10.6
x64 architecture
RTX 3060, Intel CPU
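The error is literal: `torch.compile` is not supported on Windows at the time of writing. A minimal workaround, assuming you are editing the old per-pipeline script (`app-img2img.py`, line 81 in the traceback, where `pipe` and `torch` are already defined), is to guard the compile call by platform:

```python
import platform

# Only compile the UNet where TorchDynamo supports the platform; on Windows
# this falls through and the pipeline runs uncompiled (slower but functional).
# `pipe` and `torch` come from earlier in app-img2img.py.
if platform.system() != "Windows":
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```

On the current layout, compilation is opt-in via the `--torch-compile` flag, so simply omitting that flag avoids this code path entirely.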

Model Speed Up

Hello, I'm using controlnetSDXLTurbo for a project.
My machine has an RTX 3090.

I'm wondering how I can speed up inference, or whether you can recommend a model for that. Thanks!
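A few knobs that usually help on a 3090: the `--use-taesd`, `--torch-compile`, and `--sfast` server options, and keeping the resolution down. The single biggest win is typically the Tiny Autoencoder. As a hedged sketch of what that swap looks like in plain Diffusers (the model IDs below are the public SDXL-Turbo and TAESD weights, not necessarily exactly what this repo loads):

```python
import torch
from diffusers import AutoPipelineForImage2Image, AutoencoderTiny

# Load an SDXL-Turbo img2img pipeline (assumed checkpoint, for illustration).
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Swap the full SDXL VAE for the Tiny Autoencoder (TAESD): decoding each
# frame becomes much cheaper, at a small cost in image quality.
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesdxl", torch_dtype=torch.float16
).to("cuda")
```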

Webcam not detected via GUI

Hello

Not sure if it's just me, but my webcam isn't being detected for some reason. Other applications on my machine can detect the camera, and everything else in the GUI works fine. I'm using an M2.


Web GUI can't start

python run.py --reload --pipeline controlnetLoraSD15

Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'enumerateDevices')
    at Object.enumerateDevices (2.7386ed29.js:1:39137)
    at HTMLButtonElement.B (2.7386ed29.js:8:2589)
    at scheduler.d303939e.js:1:1497
    at Array.forEach (<anonymous>)
    at HTMLButtonElement.N (scheduler.d303939e.js:1:1484)
    at HTMLButtonElement.l (2.7386ed29.js:1:3003)
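`navigator.mediaDevices` is `undefined` on insecure origins, which makes the `enumerateDevices` call fail exactly like this. Opening the page via `http://localhost` instead of a LAN IP, or serving over HTTPS with `--ssl-certfile`/`--ssl-keyfile`, is the usual fix.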

No image generated with `txt2imglora`

Running this:

TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 python -m uvicorn "app-txt2imglora:app" --host 0.0.0.0 --port 7860 --reload

Then, after clicking Start with the default prompt, I'm not seeing any images produced.

The terminal shows lots of:

INFO:     127.0.0.1:52917 - "GET /queue_size HTTP/1.1" 200 OK
INFO:     127.0.0.1:52917 - "GET /queue_size HTTP/1.1" 200 OK
INFO:     127.0.0.1:52977 - "GET /queue_size HTTP/1.1" 200 OK
...

Gave it around 20 minutes on an M2.

Any ideas?
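Two hedged suggestions: that command targets the old per-app layout, while the current code exposes the same pipeline as `txt2imgLora` through `server/main.py`; and launching with `--debug` prints per-inference timing, which helps distinguish a genuinely slow MPS/CPU run on the M2 from a stalled one. The `GET /queue_size` lines are only the frontend polling the queue, not evidence that inference is running.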

How to add models?

What is the best method of adding other models to the demo?
Which models would be appropriate?
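There is no single blessed method, but any checkpoint that matches a pipeline's base architecture generally drops in; for the LCM-LoRA pipelines that means most SD 1.5-based models. A hedged sketch in plain Diffusers (the checkpoint ID is a placeholder, not something this repo ships):

```python
import torch
from diffusers import AutoPipelineForImage2Image, LCMScheduler

# Placeholder checkpoint: any SD 1.5-based model ID from the Hub should work.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the LCM scheduler and LCM-LoRA weights so the swapped-in model can
# generate usable frames in ~4 steps, which is what makes real-time feasible.
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
```

In this repo the model ID lives in the corresponding pipeline file, so pointing it at a different checkpoint there is usually all that's needed.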

link 404


This is great work, but the "here" link can't be found; it returns a 404.

No Webcam

Hello Everyone,
For some reason I can't find the option to select a webcam in Brave.


Any idea why it isn't being detected?
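Besides the secure-context requirement mentioned above, Brave's Shields (fingerprinting protection) is known to block or randomize `navigator.mediaDevices.enumerateDevices()`, which hides the camera dropdown. Lowering Shields for the site, or testing in another browser, should show whether that's the culprit.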

Error with uvicorn: unexpected extra argument

$ uvicorn "app-img2img:app" --host 0.0.0.0 --port 7860 –reload
Usage: uvicorn [OPTIONS] APP
Try 'uvicorn --help' for help.

Error: Got unexpected extra argument (▒reload)
(venv)

After installation, this error happens when I try to launch the app. Any hints? Thanks!
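Look closely at the pasted command: `–reload` starts with an en dash rather than two ASCII hyphens, so uvicorn treats it as a stray positional argument (that's the `▒` in the error output). Retyping the flag as `--reload` fixes the launch; this commonly happens when copying commands from rendered web pages.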

Safety Checker for SDXL Turbo

Hello, I'm trying to use the safety checker (--safe-checker True) with the StableDiffusionXLPipeline / StableDiffusionXLImg2ImgPipeline / StableDiffusionXLControlNetPipeline / StableDiffusionXLControlNetImg2ImgPipeline, but it doesn't work for any of them. I'm wondering if there are other ways to avoid NSFW output. Thanks.
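Two notes here. First, the server switch is `--safety-checker` with no value, so `--safe-checker True` would not enable anything. Second, if the built-in option still doesn't cover the SDXL pipelines, outputs can be screened manually with the standard Stable Diffusion safety checker; a hedged sketch, assuming the public CompVis checker weights and a CLIP image processor:

```python
import numpy as np
from diffusers.pipelines.stable_diffusion.safety_checker import (
    StableDiffusionSafetyChecker,
)
from transformers import CLIPImageProcessor

safety_checker = StableDiffusionSafetyChecker.from_pretrained(
    "CompVis/stable-diffusion-safety-checker"
)
feature_extractor = CLIPImageProcessor.from_pretrained(
    "openai/clip-vit-base-patch32"
)

def is_nsfw(pil_image) -> bool:
    # The checker takes CLIP features plus the raw image array and returns
    # the (possibly blanked) images together with a per-image NSFW flag.
    clip_input = feature_extractor(images=pil_image, return_tensors="pt")
    images = np.asarray(pil_image)[None, ...]
    _, has_nsfw = safety_checker(
        images=images, clip_input=clip_input.pixel_values
    )
    return bool(has_nsfw[0])
```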

Doesn't start after install, can't get through to the GUI

When running "uvicorn "app-txt2img:app" --host 0.0.0.0 --port 7860 --reload" i get the below error

C:\Users\Oliver\Documents\Github\Real-Time-Latent-Consistency-Model>uvicorn "app-txt2img:app" --host 0.0.0.0 --port 7860 --reload
INFO:     Will watch for changes in these directories: ['C:\Users\Oliver\Documents\Github\Real-Time-Latent-Consistency-Model']
INFO:     Uvicorn running on http://0.0.0.0:7860 (Press CTRL+C to quit)
INFO:     Started reloader process [18300] using WatchFiles
Process SpawnProcess-1:
Traceback (most recent call last):
  File "C:\Python\lib\site-packages\tensorboard\compat\__init__.py", line 42, in tf
    from tensorboard.compat import notf  # noqa: F401
ImportError: cannot import name 'notf' from 'tensorboard.compat' (C:\Python\lib\site-packages\tensorboard\compat\__init__.py)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Python\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Python\lib\site-packages\uvicorn\_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "C:\Python\lib\site-packages\uvicorn\server.py", line 59, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "C:\Python\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Python\lib\asyncio\base_events.py", line 649, in run_until_complete
    return future.result()
  File "C:\Python\lib\site-packages\uvicorn\server.py", line 66, in serve
    config.load()
  File "C:\Python\lib\site-packages\uvicorn\config.py", line 471, in load
    self.loaded_app = import_from_string(self.app)
  File "C:\Python\lib\site-packages\uvicorn\importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "C:\Python\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\Oliver\Documents\Github\Real-Time-Latent-Consistency-Model\app-txt2img.py", line 16, in <module>
    from diffusers import DiffusionPipeline, AutoencoderTiny
  File "C:\Python\lib\site-packages\diffusers\__init__.py", line 3, in <module>
    from .configuration_utils import ConfigMixin
  File "C:\Python\lib\site-packages\diffusers\configuration_utils.py", line 34, in <module>
    from .utils import (
  File "C:\Python\lib\site-packages\diffusers\utils\__init__.py", line 21, in <module>
    from .accelerate_utils import apply_forward_hook
  File "C:\Python\lib\site-packages\diffusers\utils\accelerate_utils.py", line 24, in <module>
    import accelerate
  File "C:\Python\lib\site-packages\accelerate\__init__.py", line 3, in <module>
    from .accelerator import Accelerator
  File "C:\Python\lib\site-packages\accelerate\accelerator.py", line 39, in <module>
    from .tracking import LOGGER_TYPE_TO_CLASS, GeneralTracker, filter_trackers
  File "C:\Python\lib\site-packages\accelerate\tracking.py", line 42, in <module>
    from torch.utils import tensorboard
  File "C:\Python\lib\site-packages\torch\utils\tensorboard\__init__.py", line 12, in <module>
    from .writer import FileWriter, SummaryWriter  # noqa: F401
  File "C:\Python\lib\site-packages\torch\utils\tensorboard\writer.py", line 16, in <module>
    from .embedding import (
  File "C:\Python\lib\site-packages\torch\utils\tensorboard\_embedding.py", line 9, in <module>
    HAS_GFILE_JOIN = hasattr(tf.io.gfile, "join")
  File "C:\Python\lib\site-packages\tensorboard\lazy.py", line 65, in __getattr__
    return getattr(load_once(self), attr_name)
  File "C:\Python\lib\site-packages\tensorboard\lazy.py", line 97, in wrapper
    cache[arg] = f(arg)
  File "C:\Python\lib\site-packages\tensorboard\lazy.py", line 50, in load_once
    module = load_fn()
  File "C:\Python\lib\site-packages\tensorboard\compat\__init__.py", line 45, in tf
    import tensorflow
  File "C:\Python\lib\site-packages\tensorflow\__init__.py", line 37, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "C:\Python\lib\site-packages\tensorflow\python\__init__.py", line 37, in <module>
    from tensorflow.python.eager import context
  File "C:\Python\lib\site-packages\tensorflow\python\eager\context.py", line 29, in <module>
    from tensorflow.core.framework import function_pb2
  File "C:\Python\lib\site-packages\tensorflow\core\framework\function_pb2.py", line 16, in <module>
    from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
  File "C:\Python\lib\site-packages\tensorflow\core\framework\attr_value_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
  File "C:\Python\lib\site-packages\tensorflow\core\framework\tensor_pb2.py", line 16, in <module>
    from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
  File "C:\Python\lib\site-packages\tensorflow\core\framework\resource_handle_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
  File "C:\Python\lib\site-packages\tensorflow\core\framework\tensor_shape_pb2.py", line 36, in <module>
    _descriptor.FieldDescriptor(
  File "C:\Python\lib\site-packages\google\protobuf\descriptor.py", line 544, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:

  1. Downgrade the protobuf package to 3.20.x or lower.
  2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

Any help is appreciated; I want to test out this gem with my webcam!

Windows 10 latest
AMD Threadripper
2x3090 Nvidia GPU
64GB RAM
Installed in My Documents with full rights on an NVMe SSD
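The failure is a known protobuf 4.x clash with generated descriptors, pulled in through tensorboard/tensorflow rather than this repo's own code, and the message already lists the fixes. Workaround 1 (`pip install "protobuf==3.20.3"`) usually resolves it; workaround 2 can be applied without touching installed packages by setting the variable at the very top of the entry script, before any diffusers/accelerate imports. A hedged sketch:

```python
import os

# Force the pure-Python protobuf parser before anything imports
# tensorboard/tensorflow; slower, but sidesteps the descriptor error.
os.environ.setdefault("PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION", "python")
```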
