dream80 / roop_colab
Single picture, one click, video face swap! Uses Colab scripts: no need to burn out your graphics card, and no need to turn your room into a sauna!
The swapped face tends to flicker. Is this a similarity-threshold issue, and where can I change it?
The following error occurred while I was processing the video:
Error
Unexpected token '<', " <h"... is not valid JSON
(using colab)
cmd:run.py --execution-provider cuda -s /content/drive/MyDrive/1.jpg -t /content/drive/MyDrive/11.mp4 -o /content/drive/MyDrive/out/out11.mp4 --frame-processor face_swapper --output-video-encoder libx264 --output-video-quality 35 --keep-fps --many-faces --temp-frame-format jpg --temp-frame-quality 0
EP Error /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:121 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:114 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUDA failure 35: CUDA driver version is insufficient for CUDA runtime version ; GPU=-412879360 ; hostname=a1e2af3b9974 ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_execution_provider.cc ; line=238 ; expr=cudaSetDevice(info_.device_id);
when using ['CUDAExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 435, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:121 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:114 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUDA failure 35: CUDA driver version is insufficient for CUDA runtime version ; GPU=-412879360 ; hostname=a1e2af3b9974 ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_execution_provider.cc ; line=238 ; expr=cudaSetDevice(info_.device_id);
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/content/roop/run.py", line 6, in <module>
core.run()
File "/content/roop/roop/core.py", line 217, in run
start()
File "/content/roop/roop/core.py", line 133, in start
if not frame_processor.pre_start():
File "/content/roop/roop/processors/frame/face_swapper.py", line 45, in pre_start
elif not get_one_face(cv2.imread(roop.globals.source_path)):
File "/content/roop/roop/face_analyser.py", line 30, in get_one_face
many_faces = get_many_faces(frame)
File "/content/roop/roop/face_analyser.py", line 41, in get_many_faces
return get_face_analyser().get(frame)
File "/content/roop/roop/face_analyser.py", line 18, in get_face_analyser
FACE_ANALYSER = insightface.app.FaceAnalysis(name='buffalo_l', providers=roop.globals.execution_providers)
File "/usr/local/lib/python3.10/dist-packages/insightface/app/face_analysis.py", line 31, in __init__
model = model_zoo.get_model(onnx_file, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 96, in get_model
model = router.get_model(providers=providers, provider_options=provider_options)
File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 40, in get_model
session = PickableInferenceSession(self.onnx_file, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 25, in __init__
super().__init__(model_path, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 394, in __init__
raise fallback_error from e
File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 389, in __init__
self._create_inference_session(self._fallback_providers, None)
File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 435, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:121 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:114 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*, const char*, int) [with ERRTYPE = cudaError; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUDA failure 35: CUDA driver version is insufficient for CUDA runtime version ; GPU=637870059 ; hostname=a1e2af3b9974 ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_execution_provider.cc ; line=238 ; expr=cudaSetDevice(info_.device_id);
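CUDA failure 35 ("driver version is insufficient for CUDA runtime version") usually means the VM's NVIDIA driver is older than the CUDA runtime that the installed onnxruntime-gpu wheel was built against. Short of pinning `onnxruntime-gpu` to a release matching the driver that `nvidia-smi` reports, a defensive workaround is to only request CUDA when onnxruntime actually reports it as usable. A minimal sketch; the helper name is mine:

```python
# Minimal sketch: build a provider list that requests CUDA only when
# onnxruntime reports it as available, always keeping a CPU fallback.
# In real use, pass onnxruntime.get_available_providers() as `available`.
def pick_providers(available):
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]
```

With onnxruntime installed this would be used as `onnxruntime.InferenceSession(path, providers=pick_providers(onnxruntime.get_available_providers()))`, so a driver mismatch degrades to CPU instead of raising during session creation.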
After downloading it, running it gives this error:
Windows PowerShell
Copyright (C) 2009 Microsoft Corporation. All rights reserved.
PS D:\Code\Python\roop-main> & C:/Users/Administrator/AppData/Local/Programs/Python/Python38/python.exe d:/Code/Python/roop-main/run.py
Traceback (most recent call last):
File "d:/Code/Python/roop-main/run.py", line 2, in <module>
from roop import core
File "d:\Code\Python\roop-main\roop\core.py", line 18, in <module>
import onnxruntime
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\onnxruntime\__init__.py", line 56, in <module>
raise import_capi_exception
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\onnxruntime\__init__.py", line 23, in <module>
from onnxruntime.capi._pybind_state import ExecutionMode # noqa: F401
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\onnxruntime\capi\_pybind_state.py", line 32, in <module>
from .onnxruntime_pybind11_state import * # noqa
ImportError: DLL load failed while importing onnxruntime_pybind11_state: The specified procedure could not be found.
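On Windows, `DLL load failed while importing onnxruntime_pybind11_state` most commonly points to a missing Microsoft Visual C++ 2015-2019 redistributable (x64), or an onnxruntime wheel that does not match the interpreter in use. A small diagnostic sketch (the helper and hint text are mine, not part of roop) that turns the failure into an actionable message:

```python
import importlib

# Hedged diagnostic sketch: attempt the import and translate the common
# Windows failure mode into a hint. Injecting the import function keeps
# the helper easy to exercise even without onnxruntime installed.
def onnx_import_hint(import_module=importlib.import_module):
    try:
        import_module("onnxruntime")
        return "ok"
    except ImportError as exc:
        return (f"import failed ({exc}); check that the VC++ 2015-2019 "
                "x64 redistributable is installed and that the installed "
                "onnxruntime wheel matches your Python version")
```

Running `onnx_import_hint()` in the same virtual environment that fails would either confirm the import works or print the hint with the underlying loader error attached.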
(venv) C:\temp\Tony\tonyff>python run.py
Frame processor frame_enhancer could not be loaded
(venv) C:\temp\Tony\tonyff>python -V
Python 3.11.0
(venv) C:\temp\Tony\tonyff>ver
Microsoft Windows [Version 10.0.19043.985]
--
After erasing the whole folder and pulling again, it still shows the same message.
C:\temp\Tony\tonyff> python -v run.py 2> log.txt
The resulting log.txt is attached.
cmd:/content/roop/run.py --execution-provider cuda -s /content/roop_colab/1.jpg -t /content/roop_colab/2.mp4 -o /content/roop_colab/out.mp4 --frame-processor face_swapper face_enhancer --output-video-encoder libx264 --output-video-quality 35 --keep-fps --temp-frame-format jpg --temp-frame-quality 0
Traceback (most recent call last):
File "/content/roop/run.py", line 6, in <module>
core.run()
File "/content/roop/roop/core.py", line 213, in run
if not frame_processor.pre_check():
File "/content/roop/roop/processors/frame/face_swapper.py", line 37, in pre_check
conditional_download(download_directory_path, ['https://huggingface.co/henryruhs/roop/resolve/main/inswapper_128.onnx'])
File "/content/roop/roop/utilities.py", line 142, in conditional_download
request = urllib.request.urlopen(url) # type: ignore[attr-defined]
File "/usr/lib/python3.10/urllib/request.py", line 216, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.10/urllib/request.py", line 525, in open
response = meth(req, response)
File "/usr/lib/python3.10/urllib/request.py", line 634, in http_response
response = self.parent.error(
File "/usr/lib/python3.10/urllib/request.py", line 563, in error
return self._call_chain(*args)
File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain
result = func(*args)
File "/usr/lib/python3.10/urllib/request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 401: Unauthorized
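The 401 means the hard-coded `inswapper_128.onnx` URL now requires authorization (or the file was taken down), so `conditional_download` fails before processing starts. One workaround is to download the model manually and place it where roop expects it; in code, a more forgiving downloader could try several candidate sources and report every failure. A minimal sketch, with the fetch step injected so the source list stays illustrative (use only URLs you are actually licensed to download from):

```python
# Hedged sketch: try each candidate URL in order. fetch(url) should
# return bytes or raise (e.g. urllib.error.HTTPError); on success the
# winning URL and its payload are returned, otherwise all errors are
# surfaced in one message instead of a bare traceback.
def download_first(urls, fetch):
    errors = []
    for url in urls:
        try:
            return url, fetch(url)
        except Exception as exc:
            errors.append(f"{url}: {exc}")
    raise RuntimeError("all download sources failed:\n" + "\n".join(errors))
```

In real use, `fetch` would be something like `lambda url: urllib.request.urlopen(url).read()`.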
1. As the title says, the Colab version is limited by its UI, so you can't use the up/down keys to cycle through target faces like in the local version.
2. But the faces in an image do have numeric indices: the rightmost face is 0, and the index increases as you move left.
3. Could a multi-image, multi-face swap option be added to the cloud version?
4. Thanks to the author.
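The indexing described in point 2 (rightmost face is 0, increasing leftward) can be reproduced by sorting detected faces on the left edge of their bounding box, rightmost first. A minimal sketch using plain dicts; insightface's `FaceAnalysis.get()` actually returns face objects with a `.bbox` array, so the sort key would be adapted accordingly:

```python
# Hedged sketch: index faces right-to-left by bounding-box x, so that
# index 0 is the rightmost face, matching the numbering described above.
def index_faces(faces):
    return sorted(faces, key=lambda f: f["bbox"][0], reverse=True)
```

With this ordering, a "swap face N" option only needs the integer N, which is exactly what a UI-less Colab script can take as a parameter.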
As the title says: out.mp4 was not generated.
cmd:run.py --execution-provider cuda -s /content/my/mm1.jpg -t /content/my/m_2.mp4 -o /content/my/out.mp4 --frame-processor face_swapper --video-encoder libx264 --video-quality 18 --keep-audio
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'device_id': '0', 'gpu_mem_limit': '18446744073709551615', 'gpu_external_alloc': '0', 'gpu_external_free': '0', 'gpu_external_empty_cache': '0', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'cudnn_conv1d_pad_to_nc1d': '0', 'arena_extend_strategy': 'kNextPowerOfTwo', 'do_copy_in_default_stream': '1', 'enable_cuda_graph': '0', 'cudnn_conv_use_max_workspace': '1', 'tunable_op_enable': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_tuning_enable': '0'}}
find model: /root/.insightface/models/buffalo_l/w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
100% 1594/1594 [00:08<00:00, 181.75it/s]
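When the progress bar completes (all 1594 frames swapped) but no out.mp4 appears, the failure is usually in the final ffmpeg merge step that runs after frame processing. Re-running an equivalent merge by hand surfaces ffmpeg's own error message; the sketch below only builds the argument list, and the paths and frame rate are illustrative, not taken from roop's source:

```python
# Hedged sketch: build an ffmpeg command roughly equivalent to the merge
# step. Pass the result to subprocess.run(cmd, check=True) to execute it
# and see ffmpeg's actual error output if the merge is what is failing.
def merge_cmd(frame_pattern, fps, output):
    return ["ffmpeg", "-y", "-framerate", str(fps), "-i", frame_pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", output]
```

For example, `merge_cmd("/content/temp/%04d.jpg", 25, "/content/my/out.mp4")` yields a command you can run directly in a Colab cell with a leading `!`.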
If you have figured it out, you should keep it to yourself - you don't know whether the person you are sharing it with is just a coomer or a psychopath who wants to ruin someone's life.
Hello,
First of all I'd like to say that your work is great.
I'm using the "roop_v1.3" version on Google Colab using their GPU to generate images from images.
My only concern is the program's speed: 30 seconds to generate an image seems like a lot compared to other deepfake tools (usually no more than 5 seconds, e.g. SberSwap).
Is it possible to make the tool faster? And if so, how?
Maybe for future versions.
Thanks
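Single-image latency is dominated by model loading and the execution provider, so there is a floor per run, but throughput over many frames improves with parallelism: roop-style pipelines process frames in a thread pool, the knob some versions expose as `--execution-threads`. A minimal sketch of that pattern, with `swap_one` standing in for whatever per-frame function the pipeline applies:

```python
from concurrent.futures import ThreadPoolExecutor

# Hedged sketch: apply the per-frame function concurrently instead of
# one frame at a time; results come back in input order via pool.map.
def process_frames(frames, swap_one, threads=4):
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return list(pool.map(swap_one, frames))
```

This helps batch workloads (videos, folders of images); it does not shorten the one-off 30-second single-image case, where keeping the model loaded between runs matters more.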
Since December, with GPU acceleration enabled, the backend GPU RAM usage has stayed at 0 GB and processing is not accelerated.
So far it seems only about 7 seconds of video can be synthesized. The template video is 13 seconds; after 7 seconds it freezes on one frame. I don't know whether I'm using it wrong or the script itself imposes a limit. (Environment: MBP, 2.3 GHz quad-core Intel Core i5, Intel Iris Plus Graphics 655 1536 MB)
Not to mention that the result is blurry... as soon as the head turns at any angle, all kinds of problems appear...