loopreg's People

Contributors

bharat-b7


loopreg's Issues

kaolin.models

I think we need to install kaolin v0.1, since v0.9 does not have the kaolin.models module but v0.1 does.

Using Python 3.6, kaolin v0.1 does not seem to install well on the GPUs I have (tested on an RTX 3080).

Using Python 3.7, there is a problem with kaolin.nnsearch.

v0.9 installs fine, but it simply does not have the kaolin.models module.
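
A quick way to confirm which API a given kaolin build exposes (a minimal sketch; kaolin.models is the module path the repo imports):

import kaolin

print(kaolin.__version__)
try:
    import kaolin.models  # present in the 0.1.x API, dropped in the 0.9 reorganization
    print('kaolin.models is available')
except ImportError:
    print('this kaolin build does not expose kaolin.models (likely 0.9+)')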

Can you confirm that we need kaolin v0.1?

Thanks.

Vitruvian vertex color

Thanks for sharing the code!

It seems that the file "vitruvian_cols.npy" is missing during validation.

Issue: Running LoopReg on Windows

Hi,

I am trying to run LoopReg on Windows, and when running the command "python train_PartSpecificNet.py 1 -mode val -save_name corr -batch_size 16 -split_file assets\data_split_01.pkl" I get a couple of errors: no 'nuscenes' dataset module for kaolin, and an error reading the smpl_vt_ft.pkl file. Can you kindly help me with this?


(LoopReg) D:\CG_Source\NeRFs\3D_Avatar_Pipeline\LoopReg>python train_PartSpecificNet.py 1 -mode val -save_name corr -batch_size 16 -split_file assets\data_split_01.pkl
Warning: unable to import datasets/nusc:
   No module named 'nuscenes'
Traceback (most recent call last):
  File "C:\miniconda3\envs\LoopReg\lib\site-packages\kaolin-0.1.0-py3.7-win-amd64.egg\kaolin\datasets\__init__.py", line 11, in <module>
    from .nusc import NuscDetection
  File "C:\miniconda3\envs\LoopReg\lib\site-packages\kaolin-0.1.0-py3.7-win-amd64.egg\kaolin\datasets\nusc.py", line 21, in <module>
    from nuscenes.utils.geometry_utils import transform_matrix
ModuleNotFoundError: No module named 'nuscenes'
Warning: unable to import datasets/nusc:
   None
Using BLB SMPL from the project: LoopReg
Not initializing with pre-trained supervised correspondence network
Traceback (most recent call last):
  File "train_PartSpecificNet.py", line 123, in <module>
    pretrained_path=args.pretrained_path, checkpoint_number=args.checkpoint_number, split_file=args.split_file)
  File "train_PartSpecificNet.py", line 68, in main
    num_workers=16, naked=naked).get_loader(shuffle=False)
  File "D:\CG_Source\NeRFs\3D_Avatar_Pipeline\LoopReg\data_loader\data_loader.py", line 155, in __init__
    self.vt, self.ft = sp.get_vt_ft()
  File "D:\CG_Source\NeRFs\3D_Avatar_Pipeline\LoopReg\lib\smpl_paths.py", line 97, in get_vt_ft
    vt, ft = pkl.load(open(smpl_vt_ft_path, 'r'), encoding='latin-1')
TypeError: a bytes-like object is required, not 'str'
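
The log shows the script continues past the nuscenes warning, so the fatal error is the pickle one: pickle.load is handed a text-mode file handle but needs a binary stream. A minimal sketch of the fix for the call in lib/smpl_paths.py shown in the traceback:

import pickle as pkl

# open the pickle in binary mode ('rb'); pickle.load requires a bytes stream,
# and encoding='latin-1' keeps Python-2-era pickles readable
with open(smpl_vt_ft_path, 'rb') as f:
    vt, ft = pkl.load(f, encoding='latin-1')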

How to prepare data for supervised training

Can you describe how to prepare data for supervised training?
What is meant by "registered SMPL models", in terms of file names and formats?
And how should the data folder be organized according to the split? Is the split at the folder level or at the file level?
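
One way to check the folder-vs-file question empirically is to inspect the split file shipped in the repo (a sketch; the structure printed below is a guess until the authors confirm it):

import pickle as pkl

# assets/data_split_01.pkl is the file passed via -split_file in the training commands
with open('assets/data_split_01.pkl', 'rb') as f:
    split = pkl.load(f)

# expect either a dict of train/val lists or a flat list of paths
print(type(split))
print({k: v[:3] for k, v in split.items()} if isinstance(split, dict) else split[:5])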
Thanks.

About FAUST dataset

Hello, thank you for your nice work.
And when I test on the FAUST dataset, I get an error like the following:

[screenshots of the error]

It occurs with test_scans_135.ply.
Do I need to preprocess the data, or is something wrong on my machine?

'spread_SMPL_function.py' error

Hi,
I'm trying to use 'spread_SMPL_function.py' to spread SMPL over a voxel grid and save the result in *.pkl files.
I changed the paths to the pkl SMPL models and ran it, but it fails before saving 'posedirs.pkl'.
Here is my output:

[screenshot of the output]

After that I tried uncommenting line 79:

assert closest_points == barycentric_interpolation(smpl_mesh.v[vert_ids], bary_coords)

and it seems like something is wrong in the interpolation:

[screenshot of the error]

Is there anything I can do to solve this?
I'm using Ubuntu 18.04, Python 3.6.9 and PyTorch 1.4.0.
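
As an aside: asserting array1 == array2 on multi-element numpy arrays raises "The truth value of an array ... is ambiguous", and even an element-wise check would usually fail on floats due to rounding. A tolerance-based comparison is the usual pattern (a sketch, assuming both sides are arrays of the same shape):

import numpy as np

# compare the interpolated points against the closest points with a tolerance
# instead of exact equality, which fails on floating-point data
recon = barycentric_interpolation(smpl_mesh.v[vert_ids], bary_coords)
assert np.allclose(closest_points, recon, atol=1e-6)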

Thanks for sharing your work!

Data files missing for 'smpl_paths.py'

Hi,
I found that some of the data files loaded by 'smpl_paths.py' are missing from the released code:

@staticmethod
def get_template_file():
    fname = join(ROOT, 'template', 'template.obj')
    return fname

@staticmethod
def get_template():
    return Mesh(filename=SmplPaths.get_template_file())

@staticmethod
def get_faces():
    fname = join(ROOT, 'template', 'faces.npy')
    return np.load(fname)

@staticmethod
def get_bmap():
    fname = join(ROOT, 'template', 'bmap.npy')
    return np.load(fname)

@staticmethod
def get_fmap():
    fname = join(ROOT, 'template', 'fmap.npy')
    return np.load(fname)

@staticmethod
def get_bmap_hres():
    fname = join(ROOT, 'template', 'bmap_hres.npy')
    return np.load(fname)

@staticmethod
def get_fmap_hres():
    fname = join(ROOT, 'template', 'fmap_hres.npy')
    return np.load(fname)

Would you kindly provide these template data files? Thanks~
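
In the meantime, faces.npy at least looks reconstructible from the SMPL model pickle itself, which stores the template triangulation under its 'f' key (a sketch under that assumption, with a hypothetical model path; bmap/fmap are less obvious):

import numpy as np
import pickle as pkl

# hypothetical path to the standard neutral SMPL model pickle; unpickling it
# needs the same dependencies it was written with (e.g. chumpy)
smpl_model_path = 'basicModel_neutral_lbs_10_207_0_v1.0.0.pkl'

with open(smpl_model_path, 'rb') as f:
    smpl = pkl.load(f, encoding='latin-1')

# 'f' holds the face index array of the SMPL template; saving it should
# reproduce template/faces.npy (an assumption, not confirmed by the authors)
np.save('template/faces.npy', np.asarray(smpl['f'], dtype=np.int32))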

An error when I train the network using my dataset

Traceback (most recent call last):
  File "train_PartSpecificNet.py", line 128, in <module>
    checkpoint_number=args.checkpoint_number, split_file=args.split_file)
  File "train_PartSpecificNet.py", line 65, in main
    trainer.train_model(epochs)
  File "/home/djq19/workfiles/LoopReg/models/trainer.py", line 377, in train_model
    for n, batch in enumerate(loop):
  File "/home/djq19/anaconda3/envs/kaolin/lib/python3.7/site-packages/tqdm-4.32.1-py3.7.egg/tqdm/_tqdm.py", line 1005, in __iter__
    for obj in iterable:
  File "/home/djq19/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/djq19/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 838, in _next_data
    return self._process_data(data)
  File "/home/djq19/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/home/djq19/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/_utils.py", line 394, in reraise
    raise self.exc_type(msg)
EOFError: Caught EOFError in DataLoader worker process 10.
Original Traceback (most recent call last):
  File "/home/djq19/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/djq19/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/djq19/anaconda3/envs/kaolin/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/djq19/workfiles/LoopReg/data_loader/data_loader.py", line 192, in __getitem__
    smpl_dict = pkl.load(open(cache_list[-1], 'rb'), encoding='latin-1')
EOFError: Ran out of input

It occurred at epoch 3, about 16% of the way through.
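
"EOFError: Ran out of input" from pickle.load almost always means the file being read is empty or truncated; here it is a cached pickle the data loader opens via cache_list[-1], and a common cause is an earlier run being killed mid-write. A minimal sketch to find such files so they can be deleted and regenerated (the cache directory name is hypothetical; point it at wherever your dataset cache lives):

import glob
import pickle as pkl

# scan the dataset cache for empty or truncated pickles that would raise
# "EOFError: Ran out of input" inside a DataLoader worker
for path in glob.glob('cache/**/*.pkl', recursive=True):  # hypothetical cache dir
    try:
        with open(path, 'rb') as f:
            pkl.load(f, encoding='latin-1')
    except EOFError:
        print('empty or truncated cache file:', path)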

Pretrained weight (Not naked)

I've noticed that you have provided the model for naked input. Would you kindly provide the pre-trained model for non-naked input?

Thanks in advance.

Question: Can you provide the sample dataset to test running of LoopReg

Hi,

First, thank you for your amazing work. I am trying to run LoopReg, and before I train and test it on my custom dataset, it would be great if you could provide a sample data structure and dataset with which the code works as expected. That would make it possible to train on new data while matching the expected structure and results. I hope you can share the learnt registrations and datasets referenced in the code:

#DATA_PATH = '/BS/bharat-2/static00/learnt_registration'
# add the folders you want to be the part of this dataset. Typically these would be the folders in side DATA_PATH
#datasets = ['axyz', 'renderpeople', 'renderpeople_rigged', 'th_good_1', 'th_good_3', 'julian', 'treedy']

pretrained models

Hello, where can I download the pre-trained models? I think the link is missing~

_smpl.obj for the scans

Another question, sorry...

The corresponding SMPL model for each scan is also required by the data loader. Should I use a T-pose SMPL mesh, or do I have to obtain some kind of rough initialization for each scan?
