
Comments (29)

HashedViking commented on May 25, 2024

Let me share a complete "tutorial" on how to get this working from scratch on Windows 10 (the same steps apply on macOS). Thanks to @chimaerra's comment for a good foundation.
Install Git
Install Python 3.6.0 (Anaconda installs it for you)
Install Anaconda
Download Checkpoint

Open Anaconda Prompt (Start Menu)

I've created 2 files for convenience:

fom_installation.txt - manage all required installations
preprocess.txt - prepare source video (fix bugs, crop, cut), and finally run demo.py

fom_installation.txt

mkdir FOM
cd FOM
conda create -n FOM python=3.6
conda activate FOM

git clone https://github.com/AliaksandrSiarohin/first-order-model
cd first-order-model
pip install -r requirements.txt
conda install -c pytorch pytorch
conda install ffmpeg -c conda-forge
pip install torchvision==0.2.1 tqdm==4.24.0
pip install pyyaml sklearn pandas
pip install matplotlib==2.2.2
pip install imageio==2.3.0
pip install --upgrade scikit-image
pip install imageio-ffmpeg
pip install scikit-video
pip install autocrop

git clone https://github.com/1adrianb/face-alignment
cd face-alignment
pip install -r requirements.txt
python setup.py install
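
A quick sanity check before moving on (a minimal sketch; the import fails if the face-alignment install above didn't complete):

import torch
import face_alignment  # raises ImportError if `python setup.py install` failed

# confirm PyTorch is visible and whether a CUDA GPU will be used
print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())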

preprocess.txt

# re-encoding fixes the Video Frames error for some reason, as mentioned in #197
ffmpeg -i driving_video/driving.mp4 driving_video/driving-fixed.mp4

# generates ffmpeg command below
python crop-video.py --inp driving_video/driving-fixed.mp4

# crop face from initial video
# this command is the output of the previous one, you should generate your own
ffmpeg -i driving_video/driving-fixed.mp4 -ss 0.0 -t 55.416666666666664 -filter:v "crop=145:145:227:33, scale=256:256" crop.mp4

# crop the source image
autocrop -i ./source_image --facePercent 1

# cut video to first 20 seconds to reduce the time of result generation
ffmpeg -ss 0 -i crop.mp4 -c copy -t 20 cut.mp4

mv cut.mp4 ./driving_video

# finally, generate the result
python demo.py  --config ./config/vox-adv-256.yaml --driving_video ./driving_video/cut.mp4 --source_image ./source_image/source.png --checkpoint ./fom_checkpoints/vox-adv-cpk.pth.tar --relative --adapt_scale

Every command should be run inside the same environment, called FOM in our case.
fom_installation.txt is pretty straightforward: just install everything and clone the face-alignment repo inside the first-order-model repo. Sometimes pip can fail, so pip cache purge might fix things.
preprocess.txt - read the comments

Change crop-video.py and demo.py (see #107):

In demo.py:
change

reader = imageio.get_reader(opt.driving_video)
fps = reader.get_meta_data()['fps']

to

import skvideo.io
driving_video = skvideo.io.vread(opt.driving_video)
fps = 60  # hard-coded; set this to your driving video's actual frame rate

In crop-video.py:
change

video = imageio.get_reader(args.inp)
fps = video.get_meta_data()['fps']

to

import skvideo.io
video = skvideo.io.vread(args.inp)
fps = 60
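
If you'd rather not hard-code fps = 60, skvideo can probe the real frame rate (a small sketch; it assumes the file reports @avg_frame_rate via ffprobe):

import skvideo.io

def probe_fps(path):
    # ffprobe returns the rate as a fraction string, e.g. "30000/1001"
    meta = skvideo.io.ffprobe(path)
    num, den = meta['video']['@avg_frame_rate'].split('/')
    return float(num) / float(den)

fps = probe_fps(opt.driving_video)  # or args.inp in crop-video.py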

Finally run demo.py
python demo.py --config ./config/vox-adv-256.yaml --driving_video ./driving_video/cut.mp4 --source_image ./source_image/source.png --checkpoint ./fom_checkpoints/vox-adv-cpk.pth.tar --relative --adapt_scale

aphix commented on May 25, 2024

@simplyexperimental Thanks to your comment I did a bit more digging and discovered I could install the remaining requirements (both torch 1.4.0 and torch 1.0.0, as well as torchvision 0.2.1)!

Findings: it didn't work on Python 3.8.2 -- I'm not quite sure why (both unsupported-wheel errors and no matching candidates) -- but it worked fine with 3.7.7.

This was the final command that got the installs to work (can just change version for various torch options):
pip3.7 install torch==1.0.0 torchvision==0.2.1 -f https://download.pytorch.org/whl/torch_stable.html

I'll follow up over the next couple days when I get some time to play with it to see if it can run correctly, but at least the requirements/prereqs appear good to go.

Thanks again!

For the sake of clarity, I've attached the requirements.txt; I was also able to install from it with the following command:

pip3.7 install -r requirements.txt

Note: Change torch's version to 1.0.0 for training as noted in the README

requirements.txt

HashedViking commented on May 25, 2024

Initial driving video
https://user-images.githubusercontent.com/6432677/111039406-fb118400-843e-11eb-8899-123204008f54.mp4

Source Image
https://user-images.githubusercontent.com/6432677/111039409-ff3da180-843e-11eb-9d9c-69d29190a7e3.png

Result
https://user-images.githubusercontent.com/6432677/111039423-0ebcea80-843f-11eb-8dc6-721c36392053.mp4

ZeroCool940711 commented on May 25, 2024

Here is one problem I found when trying to train a model on Windows 10: it seems the run.py script can't create the log folders because they have ':' in the folder name, which is not allowed on Windows. Training fails as soon as it starts because the folder can't be created.

python run.py --config config/test.yaml --device_ids 0,1
run.py:36: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  config = yaml.load(f)
Use predefined train-test split.
Traceback (most recent call last):
  File "run.py", line 71, in <module>
    os.makedirs(log_dir)
  File "D:\Python\lib\os.py", line 221, in makedirs
    mkdir(name, mode)
OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: 'log\\test 29-03-20 19:54:36'
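
The ':' comes from the timestamp that run.py appends to the log directory name. A minimal sketch of the workaround (the exact format string in run.py may differ; the point is to use a Windows-safe separator instead of ':'):

import os
from datetime import datetime

# '-' instead of ':' in the time portion keeps the path valid on Windows
log_dir = os.path.join('log', 'test ' + datetime.now().strftime('%d-%m-%y %H-%M-%S'))
os.makedirs(log_dir)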

forresthopkinsa commented on May 25, 2024

After messing with this for hours, I have it working.

  1. I did this installation using the Windows Store Python 3.8 distribution

  2. You might have to install the Visual C++ Build Tools; I had to at one point, but I don't know if it was really needed in the end

  3. You need to do a registry edit or else you'll hit the Windows max path length while installing the dependencies: link

  4. Use this requirements.txt instead of the original one:

cffi #==1.11.5
cloudpickle==0.5.3
cycler==0.10.0
dask==0.18.2
decorator==4.3.0
imageio #==2.3.0
imageio-ffmpeg
kiwisolver==1.0.1
matplotlib #==2.2.2
networkx==2.1
numpy #==1.15.0
pandas==0.23.4
Pillow #==5.2.0
pycparser==2.18
pygit==0.1
pyparsing==2.2.0
python-dateutil==2.7.3
pytz==2018.5
PyWavelets #==0.5.2
PyYAML==3.13
scikit-image #==0.14.0
scikit-learn==0.19.2
scipy #==1.1.0
six==1.11.0
toolz==0.9.0
# torch==1.0.0
torchvision #==0.2.1
tqdm==4.24.0
# https://download.pytorch.org/whl/cu100/torch-1.0.0-cp37-cp37m-win_amd64.whl
https://download.pytorch.org/whl/cu101/torch-1.4.0-cp38-cp38-win_amd64.whl

This could be optimized and cleaned up quite a lot but it works so I'm satisfied.

ZeroCool940711 commented on May 25, 2024

@forresthopkinsa have you tried training on Windows?
Did it work for you?
I've tried multiple times and also tried modifying the code to see if I could get training to run, but every time I get a similar error. It seems we have to replace imageio.mimread() with imageio.get_reader() to bypass a memory problem, which is what I'm hitting, but I can't make it work; I'm not sure how to use imageio.get_reader(). This is the error I'm getting; it's always something memory-related. I know my GPUs are not the best, but they do have more than 256 MB of VRAM, which is what the errors mention. I tried removing the memory test by setting memtest=False, but that didn't work either. It seems the only solutions are to get a better GPU or to find a way to use imageio.get_reader(), which I think reads the video frame by frame to avoid using too much memory.

python run.py --config config/test.yaml --device_ids 0
Use predefined train-test split.
Training...
  0%|                                                                                          | 0/100 [00:00<?, ?it/s]Traceback (most recent call last):
  File "run.py", line 77, in <module>
    train(config, generator, discriminator, kp_detector, opt.checkpoint, log_dir, dataset, opt.device_ids)
  File "D:\Projects\Python\Others\Machine Learning\DeepFakes\first-order-model-master\train.py", line 50, in train
    for x in dataloader:
  File "D:\Python\lib\site-packages\torch\utils\data\dataloader.py", line 637, in __next__
    return self._process_next_batch(batch)
  File "D:\Python\lib\site-packages\torch\utils\data\dataloader.py", line 658, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
RuntimeError: Traceback (most recent call last):
  File "D:\Python\lib\site-packages\torch\utils\data\dataloader.py", line 138, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "D:\Python\lib\site-packages\torch\utils\data\dataloader.py", line 138, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "D:\Projects\Python\Others\Machine Learning\DeepFakes\first-order-model-master\frames_dataset.py", line 154, in __getitem__
    return self.dataset[idx % self.dataset.__len__()]
  File "D:\Projects\Python\Others\Machine Learning\DeepFakes\first-order-model-master\frames_dataset.py", line 116, in __getitem__
    video_array = read_video(path, frame_shape=self.frame_shape)
  File "D:\Projects\Python\Others\Machine Learning\DeepFakes\first-order-model-master\frames_dataset.py", line 43, in read_video
    video = np.array(mimread(name))
  File "D:\Python\lib\site-packages\imageio\core\functions.py", line 370, in mimread
    int(nbyte_limit)
RuntimeError: imageio.mimread() has read over 256000000B of image data.
Stopped to avoid memory problems. Use imageio.get_reader(), increase threshold, or memtest=False
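
As the message suggests, one workaround in frames_dataset.py's read_video is to stream frames with get_reader() instead of mimread() (a sketch; it skips imageio's byte-limit check, though all frames still end up in RAM):

import numpy as np
from imageio import get_reader

def read_video_frames(name):
    # iterate frame by frame; mimread() reads everything in one call and
    # trips the 256 MB safety threshold on long clips
    reader = get_reader(name)
    frames = [frame for frame in reader]
    reader.close()
    return np.array(frames)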

forresthopkinsa commented on May 25, 2024

No, sorry, that was just to get the demo running. Should've mentioned that.

ZeroCool940711 commented on May 25, 2024

@forresthopkinsa, ok, if you get a chance to try training, let me know if it works for you; I would really appreciate it. BTW, I didn't mention it before, but I managed to fix the log folder error I described in this comment. I created a PR for it, and it was merged, so this should be fixed for everyone else running the code on Windows. The problem was that the folder names contained special characters that are not allowed on Windows, so the folders were not being created. I replaced the special characters with ones allowed on both Windows and Linux, so it should now work on both OSes, at least for that part.

simplyexperimental commented on May 25, 2024

Hey @ZeroCool940711, I'm joining this bandwagon with you. I've gotten to the point you have and did solve the memory error, but now I'm stuck on the CUDA errors: I can't seem to get this thing mapped to my CPU. I'm a noob at Python.
Your Memory solution:
driving_video = imageio.mimread('mycontent/driving_video.mp4', memtest=False)
That's how that line should be.
What you may face next (though you might not; let me know):
My problem:
checkpoint = torch.load(checkpoint_path, map_location=lambda storage, location: 'cpu')
Apparently I have no CUDA since I don't use Nvidia, and I don't know Python, so I'm trying to map the 700 MB checkpoint file vox-cpk.pth and failing miserably.... Any help would be appreciated.
Some people are suggesting torch.load('tensors.pt', map_location=lambda storage, loc: storage)
but the previous code used the checkpoint as a pointer, so I don't think I can put vox-cpk.pth.pt there, right?

I got to that point by commenting out completely (with #) the 4 lines that have cuda commands:
line 29 becomes #generator() and line 33 becomes #kp_detector(), and lines 51 and 52 are likewise completely commented.
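
For what it's worth, map_location must be a string/device or a function returning a storage; a lambda returning the string 'cpu' won't work. Two equivalent ways to load a CUDA-trained checkpoint on a CPU-only machine:

import torch

# simplest: remap every tensor to the CPU
checkpoint = torch.load(checkpoint_path, map_location='cpu')

# equivalent lambda form: return the deserialized (CPU) storage itself
checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage)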

ZeroCool940711 commented on May 25, 2024

@simplyexperimental I'm going to see if I can find a way to easily switch from GPU to CPU, as I will try to run the code on WSL (Windows Subsystem for Linux). I was not able to run training on Windows 10, so I will try WSL, which has no GPU support and will run entirely on CPU. It probably won't be something I fix today, but I will keep you updated if I can make it work. No promises though; it might work, it might not.

simplyexperimental commented on May 25, 2024

Yeah, no worries. I updated my last comment to add a bit more at the bottom. It shouldn't be this hard, haha. It's too bad I'm not familiar with all of this, because it's so exciting. I come from a 3D artist / game programmer background, and I was hooked when I went to thispersondoesnotexist.com to generate fake people; if we get this working, I can animate them too, so it's pretty crazy!

ZeroCool940711 commented on May 25, 2024

I understand you. Maybe it's not exactly the same thing, but I plan to use this for game development, to create better faces and animations for the 3D models in the game. Technically I will be transferring motion and faces from real people onto 3D models, to reduce the amount of work needed for things like certain scenes in the game. But I still have to make the code work correctly on Windows, so we'll see.

ZeroCool940711 commented on May 25, 2024

@simplyexperimental I tried running it on CPU. On WSL it works on the CPU, but it's really, really slow: something that takes 4 seconds on GPU takes more than an hour on CPU, which should give you an idea of how slow it is. On Windows it doesn't work with the CPU no matter what I tried; maybe tomorrow I will try again, but I'm pretty sure the result will be the same. Even on GPU it's giving me trouble on Windows 10, so using the CPU adds another layer of trouble on top of the existing ones.

simplyexperimental commented on May 25, 2024

Aww, that's disappointing. I know there are many other projects that can do a similar thing. Is this one really the leading edge, or are there better options out there for face animation transfer? Or maybe I should just upgrade my GPU to Nvidia xd.

ZeroCool940711 commented on May 25, 2024

@simplyexperimental sorry for taking so long to answer. Even with a good GPU it doesn't work on Windows, and from what I see you need a GPU with at least 11 GB of VRAM or more; the recommendation for running this is probably at least one or two Titan V or Titan RTX cards, so be prepared to spend a lot of money. There is DeepFaceLab, which does the face part of what this code does; it requires a few more steps, but it works. It doesn't do full-body movement, but it works for the face. You will also need a GPU, but I think it works with any GPU, not only Nvidia.

simplyexperimental commented on May 25, 2024

ZeroCool, I may have gotten it working on Windows, but I get a memory error since I only have an 8 GB RAM machine with one video card. Please take this, replace demo.py with my version, and make sure to rename it back from .txt to .py, haha.
Then run the code below in the console, but before you do, make sure the driving video, source image, and folder names match the ones I used, or change them to match yours.
python demo.py --config config/vox-256.yaml --driving_video mycontent/driving_video.mp4 --source_image mycontent/myimage.png --checkpoint checkpoints/vox-cpk.pth.tar --relative --adapt_scale
demo.txt
So good luck, and let me know the results!
You can examine my demo code to see if it's all correct. Right now I commented out two lines, but that didn't seem to cause any issues or errors; if you know better, tweak it and see what happens. For now, though, I assume you can just run it as it is and get past the memory error, hopefully to the other side with a working result!

aphix commented on May 25, 2024

I'm attempting to get it working on Win7 and hit both a libpng and a freetype issue, but I still had no luck after manually installing them.

Following the update to the requirements file provided by @forresthopkinsa (thanks!) I hit the following error (guessing it's Win7 specific):

ERROR: torch-1.4.0-cp38-cp38-win_amd64.whl is not a supported wheel on this platform.

I would love to help with a clear, simplified walkthrough for various Windows versions, not limited to 7, although that would be nice (in my experience Windows 7 has been only superficially different from 8/10 from a development perspective).

forresthopkinsa commented on May 25, 2024

Windows 7? That's tricky. The python distribution I used was the one from the Windows 10 store. Are you using Python 3.8 and is your machine 64-bit?

simplyexperimental commented on May 25, 2024

I got it running on Windows 7 on CPU, and it only takes about 20 minutes to process a 20-second video; not bad. But you'll need a different command to install the right version of torch. You don't need to type out the whole wheel-file specifics; just type the right pip command to install torch 1.4.0 and it will find the right version for you. I found the command via a Google search a while back, so I don't recall all the steps. If I recall correctly, you can use this command, but I'm not sure:
pip install torch==1.0.0 torchvision==0.2.1 -f https://download.pytorch.org/whl/torch_stable.html

aphix commented on May 25, 2024

Windows 7? That's tricky. The python distribution I used was the one from the Windows 10 store. Are you using Python 3.8 and is your machine 64-bit?

@forresthopkinsa Just saw your questions, sorry for the delay.

To confirm: Yes it's 64 bit and I've got Python 3.8.2 installed and in PATH.

simplyexperimental commented on May 25, 2024

@aphix You're welcome. Just make sure you have the latest version of the GitHub repository; there were some changes made to help CPU users like myself. If you have an Nvidia card, you can remove the --cpu flag. Enter the lines below in the command prompt and you should be good to go, if the names match up.
cd first-order-model-master/
python demo.py --cpu --config config/vox-256.yaml --driving_video mycontent/driving_video.mp4 --source_image mycontent/myimage.png --checkpoint checkpoints/vox-cpk.pth.tar --relative --adapt_scale
If you get a memory error just close all other apps.
*The code I pasted may have spacing issues, so make sure you check it.

elTRexx commented on May 25, 2024

Hi.
Maybe this is the place for me to get help?
I posted this yesterday: #126

Basically, I don't have much experience using PowerShell (or cmd or Cygwin), though I have enough computer science background to understand it once explained.

So I tried to sort things out myself, basically by trial and error plus my past knowledge.

After a week I decided to ask for help, since it seems I'm not really making progress anymore.

So what I've done is download the .zip GitHub file from here onto my SSD.

Then I read the README.md and tried to follow the process.

I'll pass over the various troubles I had installing all the packages (from Python to PyTorch; I even installed the whole NVIDIA CUDA toolkit).

I downloaded this YouTube video myself: https://www.youtube.com/watch?v=xRrrfSIgfKI.
Then I cut it down to only the last 4 seconds (I think that's easier since, from what I "debugged", face-alignment detects up to 3 faces (bboxes) at the beginning of the full video; in the last seconds only "Leonidas"'s face is recognized). I named it 300lite.mp4.

The first thing I suspect isn't working is crop-video.py, since 300lite.mp4 didn't change a bit.

Adding some print() calls to the script is how I found that the bboxes are correctly recognized, but the resulting commands[] list is still empty:
The modified script : https://pastebin.com/wtxAZmRz
The output : https://pastebin.com/5a26cy6g
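
One possible cause, offered as an assumption to verify rather than a confirmed diagnosis: some versions of crop-video.py only emit a command for face trajectories that persist for a minimum number of frames, so a 4-second clip may never qualify. Check the script's options (python crop-video.py --help) for a threshold such as the hypothetical --min_frames below, and lower it:

python crop-video.py --inp 300lite.mp4 --min_frames 60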

I also downloaded the necessary inputs for the demo.py script:
E:\first-order-model-master\checkpoints\vox-cpk.pth.tar
E:\first-order-model-master\config\vox-256.yaml
E:\first-order-model-master\Inputs\Source Image\03.png (the Theresa May face)

Then I ran this command:
PS E:\first-order-model-master> python demo.py --config .\config\vox-256.yaml --driving_video '.\Inputs\Drive Videos\300lite.mp4' --source_image '.\Inputs\Source Image\03.png' --checkpoint .\checkpoints\vox-cpk.pth.tar --relative --adapt_scale

I modified the subprocess.py script to trace the process-call arguments like this:
# Start the process
try:
    # /!\ Change : print()
    print("the args are : exe :", executable, ", args:", args, ", env:", env, " , startupinfo:", startupinfo)
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
                                             # no special security
                                             None, None,
                                             int(not close_fds),
                                             creationflags,
                                             env,
                                             os.fspath(cwd) if cwd is not None else None,
                                             startupinfo)

The resulting PS output with this print(...) added: https://pastebin.com/b4t3Bcwa

And the resulting (ugly) video (converted from .mp4 to .gif here): result.gif

simplyexperimental commented on May 25, 2024

@elTRexx Hmm, I never needed to change subprocess.py to get mine to work.
Anyway, the main thing is that you didn't get any errors.
My advice is that you need a better video, but to rule everything out, first use this video. It's an mp4 cleverly disguised as a jpg so it could be uploaded; just rename it to .mp4 and watch it, use it as the driving video, and we'll see your results.
driving_video

Remember that the AI doesn't really like dark scenes, and your video is very dark, which probably confuses it. The AI also only likes a really close zoom of the person's face the whole time and doesn't like when that zoom changes. It prefers a constant zoom and framing and can handle slight movements and rotations, but only minor amounts. The framing of the person should be as close as you can get: only around the upper shoulders to the top of the head, with little space on the left and right, basically exactly like the sample video I sent you. The AI cannot handle big rotations or objects blocking the face, like hand movements over the face. Just use my video and see if you get decent results; if you do, it means your driving video wasn't appropriate.

elTRexx commented on May 25, 2024


I have modified several scripts in 2 ways:

  1. adding print() for debugging: script behavior wasn't altered
  2. modifying torch's nn/functional.py according to the warning messages I got:

    # ADDED
    recompute_scale_factor = True
    if is_float_scale_factor:
        warnings.warn("The default behavior for interpolate/upsample with float scale_factor will change "
                      "in 1.6.0 to align with other frameworks/libraries, and use scale_factor directly, "
                      "instead of relying on the computed output size. "
                      "If you wish to keep the old behavior, please set recompute_scale_factor=True. "
                      "See the documentation of nn.Upsample for details. ")

and

def sigmoid(input):
    r"""sigmoid(input) -> Tensor

    Applies the element-wise function :math:`\text{Sigmoid}(x) = \frac{1}{1 + \exp(-x)}`

    See :class:`~torch.nn.Sigmoid` for more details.
    """
    warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
    return torch.sigmoid(input)
    # return input.sigmoid()

and

if align_corners is None:
    warnings.warn("Default upsampling behavior when mode={} is changed "
                  "to align_corners=False since 0.4.0. Please specify "
                  "align_corners=True if the old behavior is desired. "
                  "See the documentation of nn.Upsample for details.".format(mode))
    # align_corners = False
    align_corners = True

and

if align_corners is None:
    warnings.warn("Default grid_sample and affine_grid behavior has changed "
                  "to align_corners=False since 1.3.0. Please specify "
                  "align_corners=True if the old behavior is desired. "
                  "See the documentation of grid_sample for details.")
    # align_corners = False
    align_corners = True

That's all.

There are 2 points I'm skeptical about:

  1. crop-video.py seems to do nothing to my video.
  2. demo.py's first pass of computation on the video doesn't start (it stays blocked at 0%, as shown in the output).

Meanwhile I tested your video, renamed it easy.mp4, and it seems to work way better!

result_easy

elTRexx commented on May 25, 2024

So I tried a selfie video of myself, close up, cropped down to 256x256:
me

and used this photo of Richard Dean Anderson:
test2

But the result isn't what I expected! It looks like an elderly RDA!
resultRDA

AliaksandrSiarohin commented on May 25, 2024

Here I guess the problem is that you are using relative animation, and the source and driving are not aligned in expression. In other words, your selfie should start with a closed mouth; otherwise, use the find_best_frame option.
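
For reference, find_best_frame is a demo.py flag (it relies on the face-alignment package installed earlier in this thread); the paths below follow HashedViking's tutorial above:

python demo.py --config ./config/vox-adv-256.yaml --driving_video ./driving_video/cut.mp4 --source_image ./source_image/source.png --checkpoint ./fom_checkpoints/vox-adv-cpk.pth.tar --relative --adapt_scale --find_best_frame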

simplyexperimental commented on May 25, 2024

@elTRexx Yes, I experimented a lot to get the right video. It's good to know that you are not getting any errors. I don't use the auto-crop stuff; I just do it manually.
I'm glad my video worked; your newest result video is pretty good!
*I see you've made a few script changes to get it to work, but they seem to be utility changes. I didn't make a custom script to get mine working; I just ran the Python demo script and it worked.
To improve your end result: you can see the mouth flaps are very big. That's because there is a perspective difference between your driving video and the picture's zoom. Try again using these new tips:

-Something is not right about your selfie video. I think you came too close to the camera, and that's why your face looks spherical instead of flat and uniform like the reference photo. So record the selfie again, but step back a bit so your face's dimensions and zoom match the reference picture more, and try again so you don't confuse the AI with a distorted selfie. I mean that the gap between your eyes in your selfie video is much wider than in the reference picture, because you're too close to the camera or your camera is set up in a way that causes distortion.

-As Aliaksandr said, the first frame of your selfie (mouth position and head rotation) should match the reference photo as closely as you can.
Do those two things and post the results; if you see improvement, then you've got it figured out.

cagneymoreau commented on May 25, 2024

It's really finicky on Windows. About 30 days ago I struggled to install it, but I got it working. Now, for some reason, it gets stuck with no error message. I believe I made some small change to my Python setup since then, but I can't remember. Yay.

chimaerra commented on May 25, 2024

All issues were solved by following this YT tutorial: https://www.youtube.com/watch?v=ICvNnrWKHmc
After following it, I had to manually update scikit-image even after a successful installation from requirements: pip install --upgrade scikit-image

Using Windows 10 1909 and Python 3.6.0. Python 3.8.2 doesn't seem to be working; it gives various errors.

So, the text instructions in my case are:

Install Git (of course)
Install Python 3.6.0
Install Anaconda
Open Anaconda Prompt (Start Menu)

mkdir FOM
cd FOM
conda create -n FOM
conda activate FOM

git clone https://github.com/AliaksandrSiarohin/first-order-model
pip install -r requirements.txt
conda install -c pytorch pytorch
pip install torchvision==0.2.1 tqdm==4.24.0 pyyaml sklearn pandas
pip install matplotlib==2.2.2
pip install imageio==2.3.0
pip install --upgrade scikit-image

Don't ask me why the 3 last commands are needed; I don't have any clue, I just had to install them manually.

I hope it will help someone.
