Official implementation of CVPR2020 paper "Learning to Dress 3D People in Generative Clothing" https://arxiv.org/abs/1907.13615


CAPE: Clothed Auto-Person Encoding (CVPR 2020)


TensorFlow (1.13) implementation of the CAPE model, a Mesh-CVAE with a mesh patch discriminator, for dressing SMPL bodies with pose-dependent clothing, introduced in the CVPR 2020 paper:

Learning to Dress 3D People in Generative Clothing
Qianli Ma, Jinlong Yang, Anurag Ranjan, Sergi Pujades, Gerard Pons-Moll, Siyu Tang, and Michael J. Black
Full paper | Paper in 1 min | Paper in 4 min | New dataset | Project website

Google Colab demo

In case you do not have a suitable GPU environment to run the CAPE code, we offer a demo on Google Colab. It generates and visualizes the 3D geometry of the clothed SMPL meshes: Open In Colab

For the full demo and training, follow the steps below.

Installation

We recommend creating a new virtual environment for a clean installation of the dependencies. All following commands are assumed to be executed within this virtual environment. The code has been tested on Ubuntu 18.04 with Python 3.6 and CUDA 10.0.

python3 -m venv $HOME/.virtualenvs/cape
source $HOME/.virtualenvs/cape/bin/activate
pip install -U pip setuptools
  • Install the PSBody Mesh package. We currently recommend version 0.3.
  • pip install -r requirements.txt
  • Download the SMPL body model (note: use version 1.0.0 with 10 shape PCs) and place the .pkl files for both genders in body_models/smpl/. Follow the instructions to remove the Chumpy objects from both model pkls (a sanity-check sketch follows this list).
  • pip install numpy==1.16.2 (do this last so that numpy stays at version 1.16.2).
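
As an optional sanity check after these steps, here is a minimal sketch (not part of the official setup) that assumes the .pkl files sit in body_models/smpl/ as described above:

import glob
import pickle

# Verify the SMPL model pkls are in place and no longer contain Chumpy objects.
# Note: if Chumpy objects remain and chumpy is not installed, pickle.load itself will fail.
for path in glob.glob("body_models/smpl/*.pkl"):
    with open(path, "rb") as f:
        model = pickle.load(f, encoding="latin1")  # SMPL pkls are Python 2 pickles
    chumpy_keys = [k for k, v in model.items() if type(v).__module__.startswith("chumpy")]
    print(path, "OK" if not chumpy_keys else "still contains Chumpy arrays: %s" % chumpy_keys)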

Quick demo

  • Download the SMPL body model as described above.
  • cd CAPE && mkdir checkpoints
  • Download our pre-trained demo model and put the downloaded folder under the checkpoints folder. Then run:
python main.py --config configs/CAPE-affineconv_nz64_pose32_clotype32_male.yaml --mode demo

It will generate a few clothed body meshes in the results/ folder and show an on-screen visualization.
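
To inspect one of the generated meshes afterwards, here is a minimal sketch using the PSBody Mesh package installed above (the exact file name under results/ depends on the run, so the path below is a placeholder):

from psbody.mesh import Mesh

mesh = Mesh(filename="results/<run_name>/<sample>.obj")  # replace with an actual output file
mesh.show()  # opens an interactive viewer window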

Process data, training and evaluation

Prepare training data

Here we assume that the CAPE dataset is downloaded. The "raw" data are stored as an .npz file per frame. We are going to pack these data into dataset(s) that can be used to train the network. For example, the following command

python lib/prep_data.py <path_to_downloaded_CAPE_dataset> --ds_name dataset_male_4clotypes --phase both

will create a dataset named dataset_male_4clotypes, with both training and test splits, under data/datasets/dataset_male_4clotypes. This dataset contains 31036 training and 5128 test examples. Similarly, setting --ds_name dataset_female_4clotypes creates the female dataset, which contains 21090 training and 5441 test examples.
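
If you want to inspect one of the raw per-frame .npz files before packing, a minimal sketch (the path is a placeholder; we simply list the stored fields rather than assuming their names):

import numpy as np

frame = np.load("<path_to_downloaded_CAPE_dataset>/<subject>/<sequence>/<frame>.npz")
for key in frame.files:
    print(key, frame[key].shape)  # e.g. pose parameters and clothing displacements for this frame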

To customize the packed dataset, simply edit the dataset_config_dicts defined in data/dataset_configs.py for your subject / clothing type / sequences of interest.
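
For example, a custom entry might look roughly like the following (a hypothetical sketch; the field names are purely illustrative, so check the existing dataset_config_dicts for the exact keys and values the repo expects):

dataset_config_dicts = {
    'dataset_male_custom': {
        'gender': 'male',               # which SMPL gender to pack
        'subjects': ['00096'],          # CAPE subject IDs of interest (illustrative)
        'clo_types': ['shortlong'],     # clothing types to include (illustrative)
        'test_seqs': ['shake_arms'],    # sequences held out for the test split (illustrative)
    },
}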

Training

Once the dataset is packed, we are ready for training! Give the experiment a name (it will be used for saving / loading checkpoints, TensorBoard summaries, etc.), specify the gender (here we assume a male model), and run:

python main.py --config configs/config.yaml --name <exp_name> --mode train

The training will start. You can monitor the training process by pointing TensorBoard at summaries/<exp_name>. At the end of training it will automatically evaluate on the test split and run the generation demos.

To customize the architecture and training, check the arguments defined in config_parser.py, and set them either in configs/config.yaml or directly on the command line with --[some argument] <value>, as in the example below.
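
For example, to train with the improved mesh-residual block described in the Performance section below, one could append the --affine flag:

python main.py --config configs/config.yaml --name <exp_name> --mode train --affine 1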

Evaluation

Change the --mode flag to demo to run the auto-encoding evaluation. It will also run the generation demos.

python main.py --config configs/config.yaml --name <exp_name> --gender <gender> --mode demo

Performance

The public release of the CAPE dataset differs slightly from what we used in the paper due to the removal of faulty / corrupted frames. We therefore retrained our model on the dataset_male_4clotypes and dataset_female_4clotypes datasets packed as shown above, and report the performance in terms of per-vertex auto-encoding Euclidean errors (in mm).
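
For reference, the per-vertex Euclidean error can be computed as in the following minimal sketch, assuming predicted and ground-truth vertices are given in metres (the SMPL convention):

import numpy as np

def vertex_error_mm(pred_verts, gt_verts):
    """Per-vertex Euclidean error in millimetres.
    pred_verts, gt_verts: arrays of shape (num_frames, 6890, 3), in metres."""
    err = np.linalg.norm(pred_verts - gt_verts, axis=-1) * 1000.0  # metres to mm
    return err.mean(), err.std(), np.median(err)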

On male dataset:

Method       PCA           CoMA-4*       CoMA-1*       CAPE          CAPE-affine_conv**
Error mean   7.13 ± 5.27   7.32 ± 5.57   6.50 ± 5.35   6.15 ± 5.30   6.03 ± 5.18
Median       5.79          5.85          5.02          4.64          4.54

On female dataset:

Method       PCA           CoMA-4        CoMA-1        CAPE          CAPE-affine_conv
Error mean   3.87 ± 3.02   4.38 ± 3.33   3.86 ± 3.09   3.61 ± 3.01   3.58 ± 2.94
Median       3.10          3.55          3.07          2.82          2.82

* CoMA-X stands for the model by Ranjan et al. with a spatial downsampling rate X at each downsampling layer.
** CAPE-affine_conv uses an improved mesh-residual block based on the idea of this CVPR 2020 paper, instead of our original mesh residual block. It achieves improved results and is faster to train. To use this layer, pass the flag --affine 1 during training.

Miscellaneous notes on training

The latent space dimension used in the paper and in the numbers above (set by the flag --nz) is 18, to balance model size and performance, at the price of losing some clothing details. Increasing the latent dimension yields significantly better wrinkles and edges. We also provide model checkpoints trained with --nz 64 --nz_cond 32 --nz_cond2 32 --affine 1, see below.
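
For example, combining the flags above with the training command from earlier (a command sketch):

python main.py --config configs/config.yaml --name <exp_name> --mode train --nz 64 --nz_cond 32 --nz_cond2 32 --affine 1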

Pretrained models

Our pretrained models on the above two datasets can be downloaded here. Their corresponding configuration yaml files are already in the configs/ folder, named after the corresponding checkpoint folders.

To run evaluation / demo from the pretrained models, put the downloaded folder(s) under the checkpoints folder and run the evaluation command, e.g.:

python main.py --config configs/CAPE-affineconv_nz18_pose24_clotype8_male.yaml --mode demo

CAPE dataset

Together with the model, we introduce the new CAPE dataset, a large-scale 3D mesh dataset of clothed humans in motion. It contains 150K dynamic clothed human mesh registrations from real scan data, with consistent topology. We also provide precise body shape under clothing, SMPL pose parameters and clothing displacements for all data frames, as well as handy code to process the data. Check it out at our project website!

News

05/08/2020 A Google Colab demo is added!

28/07/2020 Data packing and training scripts added! Also added a few new features. Check the changelog for more details.

26/07/2020 Updated the link to the pretrained checkpoint (the previous one was faulty and generated weird shapes); minor bug fixes in the group norm parameter loading.

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the CAPE data and software (the "Dataset & Software"), including 3D meshes, pose parameters, scripts, and animations. By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

The SMPL body related files data/{template_mesh.obj, edges_smpl.npy} are subject to the license of the SMPL model. The PSBody mesh package and smplx python package are subject to their own licenses.

Citations

Citing this work

If you find our code / paper / data useful to your research, please consider citing:

@inproceedings{ma2020cape,
    title = {Learning to Dress 3D People in Generative Clothing},
    author = {Ma, Qianli and Yang, Jinlong and Ranjan, Anurag and Pujades, Sergi and Pons-Moll, Gerard and Tang, Siyu and Black, Michael J.},
    booktitle = {Computer Vision and Pattern Recognition (CVPR)},
    month = jun,
    year = {2020},
    month_numeric = {6}
}

Related projects

SCALE (CVPR 2021): We use a novel explicit representation, hundreds of local surface patches, to model pose-dependent deformation of humans in clothing, including those wearing jackets and skirts!

SCANimate (CVPR 2021): Trained on the CAPE dataset, we use implicit functions to build avatars directly from raw scans, with pose-dependent clothing deformation, without the need for surface registration or a clothing/body template. Check it out!

CoMA (ECCV 2018): Our (non-conditional) convolutional mesh autoencoder for modeling extreme facial expressions. The code in the CAPE repository is based on the CoMA repository. If you find the code of this repository useful, please consider also citing CoMA.

ClothCap (SIGGRAPH 2017): Our method of capturing and registering clothed humans from 4D scans. The CAPE dataset released with our paper incorporates the scans and registrations from ClothCap. Check out our project website for the data!

cape's People

Contributors

anuragranj, qianlim


cape's Issues

Mapping 3D T-shirt to Z variable

How can we map an unknown T-Shirt style to the Z variable?
Basically, instead of sampling, I want to dress a person in a specific 3D garment (say, a T-shirt). Is this possible with CAPE?

RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.

Hi, Qian~
I really appreciate your great work!
I want to run the demo, but I ran into a problem:

when I ran the command:
python main.py --config configs/CAPE-affineconv_nz64_pose32_clotype32_male.yaml --mode demo

the terminal's log is:

Pre-computing mesh pooling matrices ..

loading pre-saved transform matrices...
Building model graph...

WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.

condition_pose_fc1: (126, 63)
condition_pose_fc2: (63, 32)
condition_clo_label_fc1: (4, 32)
condition_pose_fc1: (126, 63)
condition_pose_fc2: (63, 32)
condition_clo_label_fc1: (4, 32)

------------[Generator]------------
------------Encoder------------
encoder_conv1: (6890, 64), K=2
encoder_conv2: (3445, 64), K=2
encoder_conv3: (3445, 128), K=2
encoder_conv4: (1723, 128), K=2
encoder_conv5: (1723, 256), K=2
encoder_conv6: (862, 256), K=2
encoder_conv7: (862, 512), K=2
encoder_conv8: (862, 512), K=2
encoder_1x1conv: (862, 64), K=1
encoder_fc_mean: (55168, 64)
encoder_fc_logvar: (55168, 64)
------------Decoder------------
decoder_fc1: (128, 55168)
decoder_1x1conv: (862, 512), K=1
decoder_resblock_affine1: (862, 256), K=2
decoder_resblock_affine2: (862, 256), K=2
decoder_resblock_affine3: (1723, 128), K=2
decoder_resblock_affine4: (1723, 128), K=2
decoder_resblock_affine5: (3445, 64), K=2
decoder_resblock_affine6: (3445, 64), K=2
decoder_resblock_affine7: (6890, 32), K=2
decoder_resblock_affine8: (6890, 32), K=2
decoder_output: (6890, 3), K=2

----------[Discriminator]----------
conv1: (3445, 64), K=3
conv2: (1723, 64), K=3
conv3: (862, 128), K=3
conv4: (431, 128), K=3
pred_map: (431, 1), K=3

----------[Discriminator]----------
conv1: (3445, 64), K=3
conv2: (1723, 64), K=3
conv3: (862, 128), K=3
conv4: (431, 128), K=3
pred_map: (431, 1), K=3

For generative experiments:
condition_pose_fc1: (126, 63)
condition_pose_fc2: (63, 32)
condition_clo_label_fc1: (4, 32)
------------Encoder------------
encoder_conv1: (6890, 64), K=2
encoder_conv2: (3445, 64), K=2
encoder_conv3: (3445, 128), K=2
encoder_conv4: (1723, 128), K=2
encoder_conv5: (1723, 256), K=2
encoder_conv6: (862, 256), K=2
encoder_conv7: (862, 512), K=2
encoder_conv8: (862, 512), K=2
encoder_1x1conv: (862, 64), K=1
encoder_fc_mean: (55168, 64)
encoder_fc_logvar: (55168, 64)
------------Decoder------------
decoder_fc1: (128, 55168)
decoder_1x1conv: (862, 512), K=1
decoder_resblock_affine1: (862, 256), K=2
decoder_resblock_affine2: (862, 256), K=2
decoder_resblock_affine3: (1723, 128), K=2
decoder_resblock_affine4: (1723, 128), K=2
decoder_resblock_affine5: (3445, 64), K=2
decoder_resblock_affine6: (3445, 64), K=2
decoder_resblock_affine7: (6890, 32), K=2
decoder_resblock_affine8: (6890, 32), K=2
decoder_output: (6890, 3), K=2

2021-03-17 12:20:43.953960: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2021-03-17 12:20:44.184974: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x558d71c02640 executing computations on platform CUDA. Devices:
2021-03-17 12:20:44.185006: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): TITAN X (Pascal), Compute Capability 6.1
2021-03-17 12:20:44.185014: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (1): TITAN X (Pascal), Compute Capability 6.1
2021-03-17 12:20:44.203509: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3598130000 Hz
2021-03-17 12:20:44.204166: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x558d71c74fd0 executing computations on platform Host. Devices:
2021-03-17 12:20:44.204198: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
2021-03-17 12:20:44.204352: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:01:00.0
totalMemory: 11.91GiB freeMemory: 11.12GiB
2021-03-17 12:20:44.204409: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 1 with properties:
name: TITAN X (Pascal) major: 6 minor: 1 memoryClockRate(GHz): 1.531
pciBusID: 0000:03:00.0
totalMemory: 11.91GiB freeMemory: 11.77GiB
2021-03-17 12:20:44.206252: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0, 1
2021-03-17 12:20:44.209304: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-03-17 12:20:44.209335: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 1
2021-03-17 12:20:44.209349: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N Y
2021-03-17 12:20:44.209360: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 1: Y N
2021-03-17 12:20:44.209479: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10813 MB memory) -> physical GPU (device: 0, name: TITAN X (Pascal), pci bus id: 0000:01:00.0, compute capability: 6.1)
2021-03-17 12:20:44.209972: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 11446 MB memory) -> physical GPU (device: 1, name: TITAN X (Pascal), pci bus id: 0000:03:00.0, compute capability: 6.1)
2021-03-17 12:20:45.017010: I tensorflow/stream_executor/dso_loader.cc:152] successfully opened CUDA library libcublas.so.10.0 locally

=============== Running demo: fix z, clotype, change pose ===============

Found 6 different pose, for each we generate 5 samples

2021-03-17 12:20:45.231372: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0, 1
2021-03-17 12:20:45.231460: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-03-17 12:20:45.231470: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0 1
2021-03-17 12:20:45.231477: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N Y
2021-03-17 12:20:45.231483: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 1: Y N
2021-03-17 12:20:45.231553: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10813 MB memory) -> physical GPU (device: 0, name: TITAN X (Pascal), pci bus id: 0000:01:00.0, compute capability: 6.1)
2021-03-17 12:20:45.231726: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 11446 MB memory) -> physical GPU (device: 1, name: TITAN X (Pascal), pci bus id: 0000:03:00.0, compute capability: 6.1)
saving results as .obj files to /home/ang/CAPE-master/results/CAPE-affineconv_nz64_pose32_clotype32_male/sample_vary_pose...
Traceback (most recent call last):
File "main.py", line 109, in
demos.run()
File "/home/ang/CAPE-master/demos.py", line 335, in run
self.sample_vary_pose()
File "/home/ang/CAPE-master/demos.py", line 164, in sample_vary_pose
save_obj=self.save_obj, obj_dir=obj_dir)
File "/home/ang/CAPE-master/demos.py", line 324, in pose_result_onepose_multisample
self.smpl_model.body_pose[:] = torch.from_numpy(pose_params[i][3:])
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.

I've searched on Google but still cannot solve it, so I hope you can give me some advice. Thanks a lot!

Best wishes
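
A common workaround for this particular PyTorch error (offered here as a suggestion, not the repository's official fix) is to perform the in-place assignment under torch.no_grad(). A minimal, self-contained sketch with stand-in tensors:

import numpy as np
import torch

# Stand-ins for smpl_model.body_pose and pose_params[i][3:] from the traceback above.
body_pose = torch.zeros(1, 69, requires_grad=True)
new_pose = np.random.randn(69).astype(np.float32)

# Writing in place to a leaf tensor that requires grad raises the RuntimeError above;
# doing it outside autograd tracking avoids it.
with torch.no_grad():
    body_pose[:] = torch.from_numpy(new_pose)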

RuntimeError: size of dimension does not match previous size, operand 1, dim 2

I followed all your README steps to build the project.
Using any of your pre-trained models and the corresponding config files, I get the RuntimeError.
[screenshot]
The error happens on this line of code:
[screenshot]
where the shapes of betas and shape_disps are (1, 10) and (6890, 3, 300).
I want to know where the error comes from. Thanks a lot!
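
For reference, the failing operation appears to be an SMPL shape-blending contraction (similar to blend_shapes in the smplx package, which the demo relies on). A small sketch of why these shapes clash, using the shapes from the report above:

import torch

betas = torch.zeros(1, 10)               # 10 shape coefficients (10-PC SMPL model)
shape_disps = torch.zeros(6890, 3, 300)  # blend shapes from a 300-PC model file

# The 'l' dimension must match between the two operands (10 vs. 300 here), so the
# contraction fails, which suggests the loaded SMPL .pkl has 300 shape PCs rather
# than the recommended 10-PC version 1.0.0.
try:
    torch.einsum('bl,mkl->bmk', [betas, shape_disps])
except RuntimeError as e:
    print(e)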

Custom 3D data

Hi,
I have some 3D scans of my own clothes and want to extract z-parameters for them. I did a bit of research and found out that my garment scans must be registered to your data. May I get some help with this registration task? How can I register my own scans to any publicly available dataset? Sample code would be much appreciated. If that won't be possible, can you give me the steps people use for this non-rigid mesh registration task?

Thanks.

No module named 'psbody'

WoW! Awesome work!
I have some trouble with your released code:

  1. ModuleNotFoundError: No module named 'psbody'. I didn't find psbody.py:
  File "main.py", line 8, in <module>
    from psbody.mesh import Mesh
ModuleNotFoundError: No module named 'psbody'
  2. I'm pretty interested in SMPL and CAPE, so could you send me a copy of the SMPL code in TensorFlow? It would be important for me to learn CAPE.
    Thanks for sharing the great work!

Error while installing requirements on Colab

I am facing this error and am not able to run the demo because of it. I looked through the closed issues to make sure I am not reporting a duplicate. Please let me know if I missed anything.

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchvision 0.13.1+cu113 requires torch==1.12.1, but you have torch 1.2.0 which is incompatible.
torchtext 0.13.1 requires torch==1.12.1, but you have torch 1.2.0 which is incompatible.
torchaudio 0.12.1+cu113 requires torch==1.12.1, but you have torch 1.2.0 which is incompatible.
fastai 2.7.9 requires torch<1.14,>=1.7, but you have torch 1.2.0 which is incompatible.

Pytorch in requirements

Hi! Great work and thank you for making the code public.
I think torch==1.2.0 is unnecessary overhead in the requirements.txt file. It would be helpful for end users if it could be removed.
TIA

Demo can save the .obj results, but it cannot display them

Hi, I can run your program and save the resulting meshes, but it does not seem able to visualize them. The display is just a black background with no rendered model. It may be a problem with MeshViewers in psbody; I tried different versions of psbody, but the problem persists. More specifically, it seems that the MeshViewers window is not initialized, but it is not clear how to fix this.
[screenshot]

viewer = MeshViewers(shape=(1, 2), titlebar=titlebar):
[screenshot]

Fail to run demo due to missing data files

Traceback (most recent call last):
File "main.py", line 27, in
reference_mesh_file=reference_mesh_file)
File "/home/xxx/CAPE/lib/load_data.py", line 51, in init
self.load()
File "/home/xxx/CAPE/lib/load_data.py", line 62, in load
vertices_train = np.load(self.train_mesh_fn)
File "/home/xxx/anaconda3/envs/tf1.13/lib/python3.6/site-packages/numpy/lib/npyio.py", line 416, in load
fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: '/home/xxx/CAPE/data/datasets/dataset_male_4clotypes/train/train_disp.npy'

How can I fix this? Thx!

Two different group norm weights with the same name in decoders

When I list the pretrained model using tf.train.list_variables, I get the results below. You can see there are two different group norm weights used by the same decoder_resblock, such as generator_1/decoder/decoder_resblock_cmr1/group_norm/beta and generator/decoder/decoder_resblock_cmr1/group_norm/beta. I want to port the pretrained model from TensorFlow to PyTorch, so I am not sure which one I should transfer.

[('condition_clo_label/fc1/dense/bias', [8]),
('condition_clo_label/fc1/dense/bias/Momentum', [8]),
('condition_clo_label/fc1/dense/kernel', [4, 8]),
('condition_clo_label/fc1/dense/kernel/Momentum', [4, 8]),
('condition_pose/fc1/dense/bias', [63]),
('condition_pose/fc1/dense/bias/Momentum', [63]),
('condition_pose/fc1/dense/kernel', [126, 63]),
('condition_pose/fc1/dense/kernel/Momentum', [126, 63]),
('condition_pose/fc2/dense/bias', [24]),
('condition_pose/fc2/dense/bias/Momentum', [24]),
('condition_pose/fc2/dense/kernel', [63, 24]),
('condition_pose/fc2/dense/kernel/Momentum', [63, 24]),
('discriminator/prediction_map/weights', [256, 1]),
('discriminator/prediction_map/weights/Momentum', [256, 1]),
('discriminator/shared/conv1/bias', [1, 1, 64]),
('discriminator/shared/conv1/bias/Momentum', [1, 1, 64]),
('discriminator/shared/conv1/weights', [105, 64]),
('discriminator/shared/conv1/weights/Momentum', [105, 64]),
('discriminator/shared/conv2/bias', [1, 1, 64]),
('discriminator/shared/conv2/bias/Momentum', [1, 1, 64]),
('discriminator/shared/conv2/weights', [192, 64]),
('discriminator/shared/conv2/weights/Momentum', [192, 64]),
('discriminator/shared/conv3/bias', [1, 1, 128]),
('discriminator/shared/conv3/bias/Momentum', [1, 1, 128]),
('discriminator/shared/conv3/weights', [192, 128]),
('discriminator/shared/conv3/weights/Momentum', [192, 128]),
('discriminator/shared/conv4/bias', [1, 1, 128]),
('discriminator/shared/conv4/bias/Momentum', [1, 1, 128]),
('discriminator/shared/conv4/weights', [384, 128]),
('discriminator/shared/conv4/weights/Momentum', [384, 128]),
('generator/decoder/1x1-conv/weights', [64, 512]),
('generator/decoder/1x1-conv/weights/Momentum', [64, 512]),
('generator/decoder/decoder_resblock_cmr1/graph_conv/weights', [512, 256]),
('generator/decoder/decoder_resblock_cmr1/graph_conv/weights/Momentum',
[512, 256]),
('generator/decoder/decoder_resblock_cmr1/graph_linear_1/weights', [544, 256]),
('generator/decoder/decoder_resblock_cmr1/graph_linear_1/weights/Momentum',
[544, 256]),
('generator/decoder/decoder_resblock_cmr1/graph_linear_2/weights', [256, 512]),
('generator/decoder/decoder_resblock_cmr1/graph_linear_2/weights/Momentum',
[256, 512]),
('generator/decoder/decoder_resblock_cmr1/graph_linear_input/weights',
[544, 512]),
('generator/decoder/decoder_resblock_cmr1/graph_linear_input/weights/Momentum',
[544, 512]),
('generator/decoder/decoder_resblock_cmr1/group_norm/beta', [544]),
('generator/decoder/decoder_resblock_cmr1/group_norm/beta/Momentum', [544]),
('generator/decoder/decoder_resblock_cmr1/group_norm/gamma', [544]),
('generator/decoder/decoder_resblock_cmr1/group_norm/gamma/Momentum', [544]),
('generator/decoder/decoder_resblock_cmr1/group_norm_1/beta', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_1/beta/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_1/gamma', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_1/gamma/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_2/beta', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_2/beta/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_2/gamma', [256]),
('generator/decoder/decoder_resblock_cmr1/group_norm_2/gamma/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr2/graph_conv/weights', [512, 256]),
('generator/decoder/decoder_resblock_cmr2/graph_conv/weights/Momentum',
[512, 256]),
('generator/decoder/decoder_resblock_cmr2/graph_linear_1/weights', [544, 256]),
('generator/decoder/decoder_resblock_cmr2/graph_linear_1/weights/Momentum',
[544, 256]),
('generator/decoder/decoder_resblock_cmr2/graph_linear_2/weights', [256, 512]),
('generator/decoder/decoder_resblock_cmr2/graph_linear_2/weights/Momentum',
[256, 512]),
('generator/decoder/decoder_resblock_cmr2/graph_linear_input/weights',
[544, 512]),
('generator/decoder/decoder_resblock_cmr2/graph_linear_input/weights/Momentum',
[544, 512]),
('generator/decoder/decoder_resblock_cmr2/group_norm/beta', [544]),
('generator/decoder/decoder_resblock_cmr2/group_norm/beta/Momentum', [544]),
('generator/decoder/decoder_resblock_cmr2/group_norm/gamma', [544]),
('generator/decoder/decoder_resblock_cmr2/group_norm/gamma/Momentum', [544]),
('generator/decoder/decoder_resblock_cmr2/group_norm_1/beta', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_1/beta/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_1/gamma', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_1/gamma/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_2/beta', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_2/beta/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_2/gamma', [256]),
('generator/decoder/decoder_resblock_cmr2/group_norm_2/gamma/Momentum', [256]),
('generator/decoder/decoder_resblock_cmr3/graph_conv/weights', [256, 128]),
('generator/decoder/decoder_resblock_cmr3/graph_conv/weights/Momentum',
[256, 128]),
('generator/decoder/decoder_resblock_cmr3/graph_linear_1/weights', [544, 128]),
('generator/decoder/decoder_resblock_cmr3/graph_linear_1/weights/Momentum',
[544, 128]),
('generator/decoder/decoder_resblock_cmr3/graph_linear_2/weights', [128, 256]),
('generator/decoder/decoder_resblock_cmr3/graph_linear_2/weights/Momentum',
[128, 256]),
('generator/decoder/decoder_resblock_cmr3/graph_linear_input/weights',
[544, 256]),
('generator/decoder/decoder_resblock_cmr3/graph_linear_input/weights/Momentum',
[544, 256]),
('generator/decoder/decoder_resblock_cmr3/group_norm/beta', [544]),
('generator/decoder/decoder_resblock_cmr3/group_norm/beta/Momentum', [544]),
('generator/decoder/decoder_resblock_cmr3/group_norm/gamma', [544]),
('generator/decoder/decoder_resblock_cmr3/group_norm/gamma/Momentum', [544]),
('generator/decoder/decoder_resblock_cmr3/group_norm_1/beta', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_1/beta/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_1/gamma', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_1/gamma/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_2/beta', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_2/beta/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_2/gamma', [128]),
('generator/decoder/decoder_resblock_cmr3/group_norm_2/gamma/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr4/graph_conv/weights', [256, 128]),
('generator/decoder/decoder_resblock_cmr4/graph_conv/weights/Momentum',
[256, 128]),
('generator/decoder/decoder_resblock_cmr4/graph_linear_1/weights', [288, 128]),
('generator/decoder/decoder_resblock_cmr4/graph_linear_1/weights/Momentum',
[288, 128]),
('generator/decoder/decoder_resblock_cmr4/graph_linear_2/weights', [128, 256]),
('generator/decoder/decoder_resblock_cmr4/graph_linear_2/weights/Momentum',
[128, 256]),
('generator/decoder/decoder_resblock_cmr4/graph_linear_input/weights',
[288, 256]),
('generator/decoder/decoder_resblock_cmr4/graph_linear_input/weights/Momentum',
[288, 256]),
('generator/decoder/decoder_resblock_cmr4/group_norm/beta', [288]),
('generator/decoder/decoder_resblock_cmr4/group_norm/beta/Momentum', [288]),
('generator/decoder/decoder_resblock_cmr4/group_norm/gamma', [288]),
('generator/decoder/decoder_resblock_cmr4/group_norm/gamma/Momentum', [288]),
('generator/decoder/decoder_resblock_cmr4/group_norm_1/beta', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_1/beta/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_1/gamma', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_1/gamma/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_2/beta', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_2/beta/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_2/gamma', [128]),
('generator/decoder/decoder_resblock_cmr4/group_norm_2/gamma/Momentum', [128]),
('generator/decoder/decoder_resblock_cmr5/graph_conv/weights', [128, 64]),
('generator/decoder/decoder_resblock_cmr5/graph_conv/weights/Momentum',
[128, 64]),
('generator/decoder/decoder_resblock_cmr5/graph_linear_1/weights', [288, 64]),
('generator/decoder/decoder_resblock_cmr5/graph_linear_1/weights/Momentum',
[288, 64]),
('generator/decoder/decoder_resblock_cmr5/graph_linear_2/weights', [64, 128]),
('generator/decoder/decoder_resblock_cmr5/graph_linear_2/weights/Momentum',
[64, 128]),
('generator/decoder/decoder_resblock_cmr5/graph_linear_input/weights',
[288, 128]),
('generator/decoder/decoder_resblock_cmr5/graph_linear_input/weights/Momentum',
[288, 128]),
('generator/decoder/decoder_resblock_cmr5/group_norm/beta', [288]),
('generator/decoder/decoder_resblock_cmr5/group_norm/beta/Momentum', [288]),
('generator/decoder/decoder_resblock_cmr5/group_norm/gamma', [288]),
('generator/decoder/decoder_resblock_cmr5/group_norm/gamma/Momentum', [288]),
('generator/decoder/decoder_resblock_cmr5/group_norm_1/beta', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_1/beta/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_1/gamma', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_1/gamma/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_2/beta', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_2/beta/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_2/gamma', [64]),
('generator/decoder/decoder_resblock_cmr5/group_norm_2/gamma/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr6/graph_conv/weights', [128, 64]),
('generator/decoder/decoder_resblock_cmr6/graph_conv/weights/Momentum',
[128, 64]),
('generator/decoder/decoder_resblock_cmr6/graph_linear_1/weights', [160, 64]),
('generator/decoder/decoder_resblock_cmr6/graph_linear_1/weights/Momentum',
[160, 64]),
('generator/decoder/decoder_resblock_cmr6/graph_linear_2/weights', [64, 128]),
('generator/decoder/decoder_resblock_cmr6/graph_linear_2/weights/Momentum',
[64, 128]),
('generator/decoder/decoder_resblock_cmr6/graph_linear_input/weights',
[160, 128]),
('generator/decoder/decoder_resblock_cmr6/graph_linear_input/weights/Momentum',
[160, 128]),
('generator/decoder/decoder_resblock_cmr6/group_norm/beta', [160]),
('generator/decoder/decoder_resblock_cmr6/group_norm/beta/Momentum', [160]),
('generator/decoder/decoder_resblock_cmr6/group_norm/gamma', [160]),
('generator/decoder/decoder_resblock_cmr6/group_norm/gamma/Momentum', [160]),
('generator/decoder/decoder_resblock_cmr6/group_norm_1/beta', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_1/beta/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_1/gamma', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_1/gamma/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_2/beta', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_2/beta/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_2/gamma', [64]),
('generator/decoder/decoder_resblock_cmr6/group_norm_2/gamma/Momentum', [64]),
('generator/decoder/decoder_resblock_cmr7/graph_conv/weights', [64, 32]),
('generator/decoder/decoder_resblock_cmr7/graph_conv/weights/Momentum',
[64, 32]),
('generator/decoder/decoder_resblock_cmr7/graph_linear_1/weights', [160, 32]),
('generator/decoder/decoder_resblock_cmr7/graph_linear_1/weights/Momentum',
[160, 32]),
('generator/decoder/decoder_resblock_cmr7/graph_linear_2/weights', [32, 64]),
('generator/decoder/decoder_resblock_cmr7/graph_linear_2/weights/Momentum',
[32, 64]),
('generator/decoder/decoder_resblock_cmr7/graph_linear_input/weights',
[160, 64]),
('generator/decoder/decoder_resblock_cmr7/graph_linear_input/weights/Momentum',
[160, 64]),
('generator/decoder/decoder_resblock_cmr7/group_norm/beta', [160]),
('generator/decoder/decoder_resblock_cmr7/group_norm/beta/Momentum', [160]),
('generator/decoder/decoder_resblock_cmr7/group_norm/gamma', [160]),
('generator/decoder/decoder_resblock_cmr7/group_norm/gamma/Momentum', [160]),
('generator/decoder/decoder_resblock_cmr7/group_norm_1/beta', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_1/beta/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_1/gamma', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_1/gamma/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_2/beta', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_2/beta/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_2/gamma', [32]),
('generator/decoder/decoder_resblock_cmr7/group_norm_2/gamma/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr8/graph_conv/weights', [64, 32]),
('generator/decoder/decoder_resblock_cmr8/graph_conv/weights/Momentum',
[64, 32]),
('generator/decoder/decoder_resblock_cmr8/graph_linear_1/weights', [96, 32]),
('generator/decoder/decoder_resblock_cmr8/graph_linear_1/weights/Momentum',
[96, 32]),
('generator/decoder/decoder_resblock_cmr8/graph_linear_2/weights', [32, 64]),
('generator/decoder/decoder_resblock_cmr8/graph_linear_2/weights/Momentum',
[32, 64]),
('generator/decoder/decoder_resblock_cmr8/graph_linear_input/weights',
[96, 64]),
('generator/decoder/decoder_resblock_cmr8/graph_linear_input/weights/Momentum',
[96, 64]),
('generator/decoder/decoder_resblock_cmr8/group_norm/beta', [96]),
('generator/decoder/decoder_resblock_cmr8/group_norm/beta/Momentum', [96]),
('generator/decoder/decoder_resblock_cmr8/group_norm/gamma', [96]),
('generator/decoder/decoder_resblock_cmr8/group_norm/gamma/Momentum', [96]),
('generator/decoder/decoder_resblock_cmr8/group_norm_1/beta', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_1/beta/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_1/gamma', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_1/gamma/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_2/beta', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_2/beta/Momentum', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_2/gamma', [32]),
('generator/decoder/decoder_resblock_cmr8/group_norm_2/gamma/Momentum', [32]),
('generator/decoder/fc1/dense/bias', [55168]),
('generator/decoder/fc1/dense/bias/Momentum', [55168]),
('generator/decoder/fc1/dense/kernel', [50, 55168]),
('generator/decoder/fc1/dense/kernel/Momentum', [50, 55168]),
('generator/decoder/outputs/bias', [1, 6890, 3]),
('generator/decoder/outputs/bias/Momentum', [1, 6890, 3]),
('generator/decoder/outputs/weights', [192, 3]),
('generator/decoder/outputs/weights/Momentum', [192, 3]),
('generator/encoder/1x1-conv/weights', [512, 64]),
('generator/encoder/1x1-conv/weights/Momentum', [512, 64]),
('generator/encoder/encoder_conv1/bias', [1, 1, 64]),
('generator/encoder/encoder_conv1/bias/Momentum', [1, 1, 64]),
('generator/encoder/encoder_conv1/weights', [6, 64]),
('generator/encoder/encoder_conv1/weights/Momentum', [6, 64]),
('generator/encoder/encoder_conv2/bias', [1, 1, 64]),
('generator/encoder/encoder_conv2/bias/Momentum', [1, 1, 64]),
('generator/encoder/encoder_conv2/weights', [128, 64]),
('generator/encoder/encoder_conv2/weights/Momentum', [128, 64]),
('generator/encoder/encoder_conv3/bias', [1, 1, 128]),
('generator/encoder/encoder_conv3/bias/Momentum', [1, 1, 128]),
('generator/encoder/encoder_conv3/weights', [128, 128]),
('generator/encoder/encoder_conv3/weights/Momentum', [128, 128]),
('generator/encoder/encoder_conv4/bias', [1, 1, 128]),
('generator/encoder/encoder_conv4/bias/Momentum', [1, 1, 128]),
('generator/encoder/encoder_conv4/weights', [256, 128]),
('generator/encoder/encoder_conv4/weights/Momentum', [256, 128]),
('generator/encoder/encoder_conv5/bias', [1, 1, 256]),
('generator/encoder/encoder_conv5/bias/Momentum', [1, 1, 256]),
('generator/encoder/encoder_conv5/weights', [256, 256]),
('generator/encoder/encoder_conv5/weights/Momentum', [256, 256]),
('generator/encoder/encoder_conv6/bias', [1, 1, 256]),
('generator/encoder/encoder_conv6/bias/Momentum', [1, 1, 256]),
('generator/encoder/encoder_conv6/weights', [512, 256]),
('generator/encoder/encoder_conv6/weights/Momentum', [512, 256]),
('generator/encoder/encoder_conv7/bias', [1, 1, 512]),
('generator/encoder/encoder_conv7/bias/Momentum', [1, 1, 512]),
('generator/encoder/encoder_conv7/weights', [512, 512]),
('generator/encoder/encoder_conv7/weights/Momentum', [512, 512]),
('generator/encoder/encoder_conv8/bias', [1, 1, 512]),
('generator/encoder/encoder_conv8/bias/Momentum', [1, 1, 512]),
('generator/encoder/encoder_conv8/weights', [1024, 512]),
('generator/encoder/encoder_conv8/weights/Momentum', [1024, 512]),
('generator/encoder/fc_mean/dense/bias', [18]),
('generator/encoder/fc_mean/dense/bias/Momentum', [18]),
('generator/encoder/fc_mean/dense/kernel', [55168, 18]),
('generator/encoder/fc_mean/dense/kernel/Momentum', [55168, 18]),
('generator/encoder/fc_var/dense/bias', [18]),
('generator/encoder/fc_var/dense/bias/Momentum', [18]),
('generator/encoder/fc_var/dense/kernel', [55168, 18]),
('generator/encoder/fc_var/dense/kernel/Momentum', [55168, 18]),
('generator_1/decoder/decoder_resblock_cmr1/group_norm/beta', [544]),
('generator_1/decoder/decoder_resblock_cmr1/group_norm/gamma', [544]),
('generator_1/decoder/decoder_resblock_cmr1/group_norm_1/beta', [256]),
('generator_1/decoder/decoder_resblock_cmr1/group_norm_1/gamma', [256]),
('generator_1/decoder/decoder_resblock_cmr1/group_norm_2/beta', [256]),
('generator_1/decoder/decoder_resblock_cmr1/group_norm_2/gamma', [256]),
('generator_1/decoder/decoder_resblock_cmr2/group_norm/beta', [544]),
('generator_1/decoder/decoder_resblock_cmr2/group_norm/gamma', [544]),
('generator_1/decoder/decoder_resblock_cmr2/group_norm_1/beta', [256]),
('generator_1/decoder/decoder_resblock_cmr2/group_norm_1/gamma', [256]),
('generator_1/decoder/decoder_resblock_cmr2/group_norm_2/beta', [256]),
('generator_1/decoder/decoder_resblock_cmr2/group_norm_2/gamma', [256]),
('generator_1/decoder/decoder_resblock_cmr3/group_norm/beta', [544]),
('generator_1/decoder/decoder_resblock_cmr3/group_norm/gamma', [544]),
('generator_1/decoder/decoder_resblock_cmr3/group_norm_1/beta', [128]),
('generator_1/decoder/decoder_resblock_cmr3/group_norm_1/gamma', [128]),
('generator_1/decoder/decoder_resblock_cmr3/group_norm_2/beta', [128]),
('generator_1/decoder/decoder_resblock_cmr3/group_norm_2/gamma', [128]),
('generator_1/decoder/decoder_resblock_cmr4/group_norm/beta', [288]),
('generator_1/decoder/decoder_resblock_cmr4/group_norm/gamma', [288]),
('generator_1/decoder/decoder_resblock_cmr4/group_norm_1/beta', [128]),
('generator_1/decoder/decoder_resblock_cmr4/group_norm_1/gamma', [128]),
('generator_1/decoder/decoder_resblock_cmr4/group_norm_2/beta', [128]),
('generator_1/decoder/decoder_resblock_cmr4/group_norm_2/gamma', [128]),
('generator_1/decoder/decoder_resblock_cmr5/group_norm/beta', [288]),
('generator_1/decoder/decoder_resblock_cmr5/group_norm/gamma', [288]),
('generator_1/decoder/decoder_resblock_cmr5/group_norm_1/beta', [64]),
('generator_1/decoder/decoder_resblock_cmr5/group_norm_1/gamma', [64]),
('generator_1/decoder/decoder_resblock_cmr5/group_norm_2/beta', [64]),
('generator_1/decoder/decoder_resblock_cmr5/group_norm_2/gamma', [64]),
('generator_1/decoder/decoder_resblock_cmr6/group_norm/beta', [160]),
('generator_1/decoder/decoder_resblock_cmr6/group_norm/gamma', [160]),
('generator_1/decoder/decoder_resblock_cmr6/group_norm_1/beta', [64]),
('generator_1/decoder/decoder_resblock_cmr6/group_norm_1/gamma', [64]),
('generator_1/decoder/decoder_resblock_cmr6/group_norm_2/beta', [64]),
('generator_1/decoder/decoder_resblock_cmr6/group_norm_2/gamma', [64]),
('generator_1/decoder/decoder_resblock_cmr7/group_norm/beta', [160]),
('generator_1/decoder/decoder_resblock_cmr7/group_norm/gamma', [160]),
('generator_1/decoder/decoder_resblock_cmr7/group_norm_1/beta', [32]),
('generator_1/decoder/decoder_resblock_cmr7/group_norm_1/gamma', [32]),
('generator_1/decoder/decoder_resblock_cmr7/group_norm_2/beta', [32]),
('generator_1/decoder/decoder_resblock_cmr7/group_norm_2/gamma', [32]),
('generator_1/decoder/decoder_resblock_cmr8/group_norm/beta', [96]),
('generator_1/decoder/decoder_resblock_cmr8/group_norm/gamma', [96]),
('generator_1/decoder/decoder_resblock_cmr8/group_norm_1/beta', [32]),
('generator_1/decoder/decoder_resblock_cmr8/group_norm_1/gamma', [32]),
('generator_1/decoder/decoder_resblock_cmr8/group_norm_2/beta', [32]),
('generator_1/decoder/decoder_resblock_cmr8/group_norm_2/gamma', [32]),
('loss/total_loss/add_4/ExponentialMovingAverage', []),
('loss/total_loss/add_5/ExponentialMovingAverage', []),
('training/global_step', [])]
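
One way to check whether the two sets of group norm weights actually hold different values (a sketch; the checkpoint path is a placeholder, and tf refers to TensorFlow 1.x as used by this repo):

import numpy as np
import tensorflow as tf

ckpt = "checkpoints/<checkpoint_folder>"  # placeholder path to the downloaded checkpoint
name_a = "generator/decoder/decoder_resblock_cmr1/group_norm/beta"
name_b = "generator_1/decoder/decoder_resblock_cmr1/group_norm/beta"

beta_a = tf.train.load_variable(ckpt, name_a)
beta_b = tf.train.load_variable(ckpt, name_b)
print(np.allclose(beta_a, beta_b))  # True would mean the two copies hold the same values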

smpl_model_folder

Great work! To run the demo, we need to set the path to the SMPL model folder.

python main.py --config configs/config.yaml --mode demo --vis_demo 1 --smpl_model_folder <path to SMPL model folder>

However, I didn't find the SMPL files in this repo (I may have missed them). Could you provide the files or links to download them? Thanks!
