
lingjie0206 / neural_actor_main_code


Official repository of "Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control" (SIGGRAPH Asia 2021)

License: MIT License

Shell 0.67% Python 71.63% C++ 10.34% C 2.87% Cuda 14.49%

neural_actor_main_code's People

Contributors

lingjie0206


neural_actor_main_code's Issues

Data Download Request.

Hi. Thanks for contributing the code and data of your amazing work!

However, I have a problem with the data. I requested an account on the data page a week ago, using an institutional email address as requested. Would you please check the application? Thanks a lot!

pip install error

Thank you for sharing the code of your awesome work. This may be a dumb question, but I encountered the following error when running pip install -r requirements.txt. (I changed git+https to git+ssh in requirements.txt, since password authentication is no longer supported by GitHub.)

Collecting git+ssh://****@github.com/MultiPath/fairseq-stable.git (from -r requirements.txt (line 21))
  Cloning ssh://****@github.com/MultiPath/fairseq-stable.git to /tmp/pip-req-build-d73g8hcw
  Running command git clone -q 'ssh://****@github.com/MultiPath/fairseq-stable.git' /tmp/pip-req-build-d73g8hcw
  Resolved ssh://****@github.com/MultiPath/fairseq-stable.git to commit 8aa06aa03b596de58d106d3f55ff43e2b9aa0b80
  Running command git submodule update --init --recursive -q
ERROR: Repository not found.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights and the repository exists.
fatal: clone of 'https://github.com/myleott/transformers.git' into submodule path '/tmp/pip-req-build-d73g8hcw/fairseq/models/huggingface/transformers' failed
Failed to clone 'fairseq/models/huggingface/transformers'. Retry scheduled
ERROR: Repository not found.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights and the repository exists.
fatal: clone of 'https://github.com/myleott/transformers.git' into submodule path '/tmp/pip-req-build-d73g8hcw/fairseq/models/huggingface/transformers' failed
Failed to clone 'fairseq/models/huggingface/transformers' a second time, aborting
WARNING: Discarding git+ssh://****@github.com/MultiPath/fairseq-stable.git. Command errored out with exit status 1: git submodule update --init --recursive -q Check the logs for full command output.
ERROR: Command errored out with exit status 1: git submodule update --init --recursive -q Check the logs for full command output.

Is there a workaround for this problem?
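A possible workaround, sketched but untested: the failure comes from the myleott/transformers submodule inside fairseq-stable, whose upstream repository appears to have been removed. Installing fairseq-stable manually without recursing into that submodule may let the rest of requirements.txt proceed:

```shell
# Sketch of a workaround: install fairseq-stable by hand, skipping the
# missing 'fairseq/models/huggingface/transformers' submodule.
git clone https://github.com/MultiPath/fairseq-stable.git
cd fairseq-stable
# Do NOT run 'git submodule update --init --recursive' here; the
# huggingface submodule's upstream repository no longer exists.
pip install -e .
cd ..
# Then remove the fairseq-stable line from requirements.txt and re-run:
pip install -r requirements.txt
```

This assumes fairseq-stable works without the optional huggingface submodule; if an import later fails inside fairseq/models/huggingface, that assumption does not hold.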

I encountered an error while training the neural renderer

/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr/criterions/rendering_loss.py:150: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
flatten_index = (flatten_uv[:,:,0] // h + flatten_uv[:,:,1] // w * W).long()
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [2,0,0], thread: [64,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && "index out of bounds" failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [2,0,0], thread: [65,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && "index out of bounds" failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [2,0,0], thread: [66,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && "index out of bounds" failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:144: operator(): block: [0,0,0], thread: [63,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && "index out of bounds" failed.
Traceback (most recent call last):
File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/train.py", line 32, in
cli_main()
File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr_cli/train.py", line 378, in cli_main
main(args)
File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr_cli/train.py", line 107, in main
should_end_training = train(args, trainer, task, epoch_itr)
File "/home/lab3090/anaconda3/envs/neuralactor/lib/python3.8/contextlib.py", line 75, in inner
return func(*args, **kwds)
File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr_cli/train.py", line 184, in train
log_output = trainer.train_step(samples)
File "/home/lab3090/anaconda3/envs/neuralactor/lib/python3.8/contextlib.py", line 75, in inner
return func(*args, **kwds)
File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairseq-stable/fairseq/trainer.py", line 457, in train_step
raise e
File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairseq-stable/fairseq/trainer.py", line 425, in train_step
loss, sample_size_i, logging_output = self.task.train_step(
File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr/tasks/neural_rendering.py", line 329, in train_step
return super().train_step(sample, model, criterion, optimizer, update_num, ignore_grad)
File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairseq-stable/fairseq/tasks/fairseq_task.py", line 351, in train_step
loss, sample_size, logging_output = criterion(model, sample)
File "/home/lab3090/anaconda3/envs/neuralactor/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr/criterions/rendering_loss.py", line 48, in forward
loss, loss_output = self.compute_loss(model, net_output, sample, reduce=reduce)
File "/home/lab3090/D/zhujj/Neural_Actor_Main_Code-master/fairnr/criterions/rendering_loss.py", line 156, in compute_loss
target_colors = target_colors.gather(2, flatten_index.unsqueeze(-1).repeat(1,1,1,3))
RuntimeError: CUDA error: device-side assert triggered
May I ask how to solve it?
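For what it's worth, the deprecation warning names the likely culprit: the old Tensor floordiv rounded toward zero, not down, so a stray negative uv coordinate can produce an out-of-range index for the later gather, which is exactly what the device-side assert reports. A minimal pure-Python sketch of the trunc-vs-floor difference the warning describes (torch-free, for illustration only):

```python
import math

def trunc_div(a, b):
    # Old Tensor.__floordiv__ behavior: round toward zero, like C integer division
    return int(a / b)

def floor_div(a, b):
    # Behavior of torch.div(a, b, rounding_mode='floor') and Python's //
    return math.floor(a / b)

# Identical for non-negative operands...
assert trunc_div(7, 2) == floor_div(7, 2) == 3
# ...but they differ for negative ones:
print(trunc_div(-7, 2))  # -3
print(floor_div(-7, 2))  # -4
```

Rewriting the flagged line with torch.div(..., rounding_mode='trunc') keeps the current behavior and silences the warning; whether 'trunc' or 'floor' is wanted, and whether flatten_uv can legitimately go negative or exceed (h, w), is worth checking before assuming the gather assert is unrelated.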

pynvml.nvml.NVMLError_Unknown

I ran into the following error when running inference and don't know how to fix it.

Traceback (most recent call last):
  File "inference.py", line 95, in <module>
    main()
  File "inference.py", line 40, in main
    set_affinity(args.local_rank)
  File "/mnt/c/Users/wongm/Desktop/Neural_Actor_Main_Code/imaginaire/imaginaire/utils/gpu_affinity.py", line 56, in set_affinity
    os.sched_setaffinity(0, dev.getCpuAffinity())
  File "/mnt/c/Users/wongm/Desktop/Neural_Actor_Main_Code/imaginaire/imaginaire/utils/gpu_affinity.py", line 37, in getCpuAffinity
    for j in pynvml.nvmlDeviceGetCpuAffinity(self.handle, device._nvml_affinity_elements):
  File "/home/finn/.local/lib/python3.6/site-packages/pynvml/nvml.py", line 1745, in nvmlDeviceGetCpuAffinity
    _nvmlCheckReturn(ret)
  File "/home/finn/.local/lib/python3.6/site-packages/pynvml/nvml.py", line 765, in _nvmlCheckReturn
    raise NVMLError(ret)
pynvml.nvml.NVMLError_Unknown: Unknown Error
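Not a fix for the underlying NVML issue (the /mnt/c paths suggest WSL2, where some NVML queries are known to fail), but since CPU-affinity pinning is a performance optimization rather than a correctness requirement, a defensive wrapper around the affinity call would let inference proceed. set_affinity_or_warn and broken_set_affinity below are illustrative names, not part of the repository:

```python
def set_affinity_or_warn(set_affinity, local_rank):
    """Call the original affinity setup, but treat any NVML failure as
    non-fatal: skipping affinity pinning only affects performance."""
    try:
        set_affinity(local_rank)
    except Exception as exc:
        print(f"warning: skipping CPU affinity setup ({exc!r})")

# Stand-in for the failing pynvml call, for demonstration only.
def broken_set_affinity(local_rank):
    raise RuntimeError("NVMLError_Unknown")

set_affinity_or_warn(broken_set_affinity, 0)  # warns instead of raising
```

Under that assumption, wrapping the set_affinity(args.local_rank) call in inference.py the same way should get past this error.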

I encountered a problem in the second step

I used a single GPU for inference, but got an error like this:
Traceback (most recent call last):
File "imaginaire/inference.py", line 95, in
main()
File "imaginaire/inference.py", line 86, in main
trainer.load_checkpoint(cfg, args.checkpoint)
File "/home/mio/work/project/Neural_Actor_Main_Code-master/imaginaire/imaginaire/trainers/base.py", line 259, in load_checkpoint
self.net_G.load_state_dict(checkpoint['net_G'])
File "/home/mio/anaconda3/envs/neuralactor/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1667, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for WrappedModel:
Missing key(s) in state_dict: "module.averaged_model.label_embedding.conv_first.layers.conv.weight_orig", "module.averaged_model.label_embedding.conv_first.layers.conv.weight_u", "module.averaged_model.label_embedding.down_0.layers.conv.weight_orig", "module.averaged_model.label_embedding.down_0.layers.conv.weight_u", "module.averaged_model.label_embedding.down_1.layers.conv.weight_orig", "module.averaged_model.label_embedding.down_1.layers.conv.weight_u", "module.averaged_model.label_embedding.down_2.layers.conv.weight_orig", "module.averaged_model.label_embedding.down_2.layers.conv.weight_u", "module.averaged_model.label_embedding.down_3.layers.conv.weight_orig", "module.averaged_model.label_embedding.down_3.layers.conv.weight_u", "module.averaged_model.label_embedding.down_4.layers.conv.weight_orig", "module.averaged_model.label_embedding.down_4.layers.conv.weight_u", "module.averaged_model.label_embedding.up_4.layers.conv.weight_orig", "module.averaged_model.label_embedding.up_4.layers.conv.weight_u", "module.averaged_model.label_embedding.up_3.layers.conv.weight_orig", "module.averaged_model.label_embedding.up_3.layers.conv.weight_u", "module.averaged_model.label_embedding.up_2.layers.conv.weight_orig", "module.averaged_model.label_embedding.up_2.layers.conv.weight_u", "module.averaged_model.label_embedding.up_1.layers.conv.weight_orig", "module.averaged_model.label_embedding.up_1.layers.conv.weight_u", "module.averaged_model.label_embedding.up_0.layers.conv.weight_orig", "module.averaged_model.label_embedding.up_0.layers.conv.weight_u", "module.averaged_model.up_7.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_7.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_7.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_7.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_6.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_6.conv_block_0.layers.conv.weight_u", 
"module.averaged_model.up_6.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_6.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_5.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_5.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_5.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_5.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_4.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_4.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_4.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_4.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_4.conv_block_s.layers.conv.weight_orig", "module.averaged_model.up_4.conv_block_s.layers.conv.weight_u", "module.averaged_model.up_3.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_3.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_3.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_3.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_3.conv_block_s.layers.conv.weight_orig", "module.averaged_model.up_3.conv_block_s.layers.conv.weight_u", "module.averaged_model.up_2.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_2.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_2.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_2.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_2.conv_block_s.layers.conv.weight_orig", "module.averaged_model.up_2.conv_block_s.layers.conv.weight_u", "module.averaged_model.up_1.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_1.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_1.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_1.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_1.conv_block_s.layers.conv.weight_orig", "module.averaged_model.up_1.conv_block_s.layers.conv.weight_u", 
"module.averaged_model.up_0.conv_block_0.layers.conv.weight_orig", "module.averaged_model.up_0.conv_block_0.layers.conv.weight_u", "module.averaged_model.up_0.conv_block_1.layers.conv.weight_orig", "module.averaged_model.up_0.conv_block_1.layers.conv.weight_u", "module.averaged_model.up_0.conv_block_s.layers.conv.weight_orig", "module.averaged_model.up_0.conv_block_s.layers.conv.weight_u", "module.averaged_model.down_0.conv_block_0.layers.conv.weight_orig", "module.averaged_model.down_0.conv_block_0.layers.conv.weight_u", "module.averaged_model.down_0.conv_block_1.layers.conv.weight_orig", "module.averaged_model.down_0.conv_block_1.layers.conv.weight_u", "module.averaged_model.down_0.conv_block_s.layers.conv.weight_orig", "module.averaged_model.down_0.conv_block_s.layers.conv.weight_u", "module.averaged_model.down_1.conv_block_0.layers.conv.weight_orig", "module.averaged_model.down_1.conv_block_0.layers.conv.weight_u", "module.averaged_model.down_1.conv_block_1.layers.conv.weight_orig", "module.averaged_model.down_1.conv_block_1.layers.conv.weight_u", "module.averaged_model.down_1.conv_block_s.layers.conv.weight_orig", "module.averaged_model.down_1.conv_block_s.layers.conv.weight_u", "module.averaged_model.down_2.conv_block_0.layers.conv.weight_orig", "module.averaged_model.down_2.conv_block_0.layers.conv.weight_u", "module.averaged_model.down_2.conv_block_1.layers.conv.weight_orig", "module.averaged_model.down_2.conv_block_1.layers.conv.weight_u", "module.averaged_model.down_2.conv_block_s.layers.conv.weight_orig", "module.averaged_model.down_2.conv_block_s.layers.conv.weight_u", "module.averaged_model.down_3.conv_block_0.layers.conv.weight_orig", "module.averaged_model.down_3.conv_block_0.layers.conv.weight_u", "module.averaged_model.down_3.conv_block_1.layers.conv.weight_orig", "module.averaged_model.down_3.conv_block_1.layers.conv.weight_u", "module.averaged_model.down_3.conv_block_s.layers.conv.weight_orig", 
"module.averaged_model.down_3.conv_block_s.layers.conv.weight_u", "module.averaged_model.down_4.conv_block_0.layers.conv.weight_orig", "module.averaged_model.down_4.conv_block_0.layers.conv.weight_u", "module.averaged_model.down_4.conv_block_1.layers.conv.weight_orig", "module.averaged_model.down_4.conv_block_1.layers.conv.weight_u", "module.averaged_model.down_4.conv_block_s.layers.conv.weight_orig", "module.averaged_model.down_4.conv_block_s.layers.conv.weight_u", "module.averaged_model.res_0.conv_block_0.layers.conv.weight_orig", "module.averaged_model.res_0.conv_block_0.layers.conv.weight_u", "module.averaged_model.res_0.conv_block_1.layers.conv.weight_orig", "module.averaged_model.res_0.conv_block_1.layers.conv.weight_u", "module.averaged_model.res_1.conv_block_0.layers.conv.weight_orig", "module.averaged_model.res_1.conv_block_0.layers.conv.weight_u", "module.averaged_model.res_1.conv_block_1.layers.conv.weight_orig", "module.averaged_model.res_1.conv_block_1.layers.conv.weight_u", "module.averaged_model.res_2.conv_block_0.layers.conv.weight_orig", "module.averaged_model.res_2.conv_block_0.layers.conv.weight_u", "module.averaged_model.res_2.conv_block_1.layers.conv.weight_orig", "module.averaged_model.res_2.conv_block_1.layers.conv.weight_u", "module.averaged_model.res_3.conv_block_0.layers.conv.weight_orig", "module.averaged_model.res_3.conv_block_0.layers.conv.weight_u", "module.averaged_model.res_3.conv_block_1.layers.conv.weight_orig", "module.averaged_model.res_3.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_lbl.0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_lbl.0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_lbl.1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_lbl.1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_lbl.2.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_lbl.2.layers.conv.weight_u", 
"module.averaged_model.flow_network_temp.down_lbl.3.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_lbl.3.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_img.0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_img.0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_img.1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_img.1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_img.2.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_img.2.layers.conv.weight_u", "module.averaged_model.flow_network_temp.down_img.3.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.down_img.3.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.0.conv_block_0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.0.conv_block_0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.0.conv_block_1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.0.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.1.conv_block_0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.1.conv_block_0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.1.conv_block_1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.1.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.2.conv_block_0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.2.conv_block_0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.2.conv_block_1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.2.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.3.conv_block_0.layers.conv.weight_orig", 
"module.averaged_model.flow_network_temp.res_flow.3.conv_block_0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.3.conv_block_1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.3.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.4.conv_block_0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.4.conv_block_0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.4.conv_block_1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.4.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.5.conv_block_0.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.5.conv_block_0.layers.conv.weight_u", "module.averaged_model.flow_network_temp.res_flow.5.conv_block_1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.res_flow.5.conv_block_1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.up_flow.1.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.up_flow.1.layers.conv.weight_u", "module.averaged_model.flow_network_temp.up_flow.3.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.up_flow.3.layers.conv.weight_u", "module.averaged_model.flow_network_temp.up_flow.5.layers.conv.weight_orig", "module.averaged_model.flow_network_temp.up_flow.5.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.conv_first.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.conv_first.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.down_0.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.down_0.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.down_1.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.down_1.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.down_2.layers.conv.weight_orig", 
"module.averaged_model.img_prev_embedding.down_2.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.down_3.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.down_3.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.down_4.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.down_4.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.up_4.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.up_4.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.up_3.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.up_3.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.up_2.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.up_2.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.up_1.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.up_1.layers.conv.weight_u", "module.averaged_model.img_prev_embedding.up_0.layers.conv.weight_orig", "module.averaged_model.img_prev_embedding.up_0.layers.conv.weight_u".

Is there a problem with the model? How can I solve it?
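The missing keys all sit under module.averaged_model..., which suggests the checkpoint was saved without the exponential-moving-average copy of the generator that this config expects. Two common workarounds, sketched under that assumption: pass strict=False to load_state_dict in trainers/base.py, or mirror the live weights into the averaged slots before loading. fill_averaged_model_keys is a hypothetical helper, not part of imaginaire:

```python
def fill_averaged_model_keys(state_dict):
    """Mirror each 'module.<name>' weight into 'module.averaged_model.<name>'
    when the averaged copy is missing, so strict loading can succeed."""
    out = dict(state_dict)
    prefix = "module."
    avg_prefix = "module.averaged_model."
    for key, value in state_dict.items():
        if key.startswith(prefix) and not key.startswith(avg_prefix):
            out.setdefault(avg_prefix + key[len(prefix):], value)
    return out

# Toy demonstration with plain values standing in for tensors:
ckpt = {"module.up_0.conv.weight": 1.0}
patched = fill_averaged_model_keys(ckpt)
print(sorted(patched))
# ['module.averaged_model.up_0.conv.weight', 'module.up_0.conv.weight']
```

The strict=False route is simpler but leaves the averaged model uninitialized; seeding it from the live weights, as above, initializes the EMA copy to match the trained generator.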

Some questions about pre-processing my own datasets.

On this page

In generate_transform_files/hello_smpl_pose.py, line 48: https://github.com/lingjie0206/Neural_Actor_Preprocessing/blob/e48dd8824b88c95191c483f7316d3cfb170b0a64/generate_transform_files/hello_smpl_pose.py#L48

https://github.com/lingjie0206/Neural_Actor_Preprocessing/blob/e48dd8824b88c95191c483f7316d3cfb170b0a64/generate_transform_files/hello_smpl_pose.py#L8

In smpl_webuser/serialization.py, load_model returns only one value (result), but here three variables are used to capture its return value.

I know that m is the SMPL model and dd holds data from smpl_webuser/serialization.py, but I don't know what A means. Could you please explain this variable? Thanks a lot!

Generalization of anime characters

Thanks for your wonderful work!
Is it possible to replace real-person images with anime characters, and then generate anime characters matching a given pose?
Looking forward to your reply!

Licence?

Hi - the License file in the code says the code is under an MIT licence, whereas the text in the README says it is under a CC-BY-NC license; one key difference is whether a commercial entity may use it. Could you clarify the situation, please?
