
ycjungsubhuman / deepdeformable3dcaricatures


[SIGGRAPH 2022] Official code for "Deep Deformable 3D Caricatures with Learned Shape Control"

License: GNU Affero General Public License v3.0

Languages: Python 99.91%, Dockerfile 0.09%
Topics: deep-learning, pytorch

deepdeformable3dcaricatures's People

Contributors

ycjungsubhuman


deepdeformable3dcaricatures's Issues

animation of mesh

Hello,
Thanks for sharing this great paper and code.
I have a question: is it possible to animate the output mesh?
For example, the BFM basis enables animation by manipulating expression coefficients.
Is it possible to achieve this effect with this model? If so, how?
Thanks,
Ofer
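
One common way to get such an effect from a latent-shape model (a hedged sketch, not an answer from the authors) is to interpolate between two latent codes, e.g. a neutral head and an expressive head, and decode each intermediate code into a mesh. In the sketch below, `decode`, `latent_a`, and `latent_b` are hypothetical placeholders for whatever the trained model actually exposes:

import torch

# Hypothetical sketch: animate by linearly interpolating between two latent
# codes and decoding each intermediate code into a mesh. `decode`,
# `latent_a`, and `latent_b` are placeholders, not the repository's API.
def interpolate_latents(decode, latent_a, latent_b, num_frames=30):
    frames = []
    for t in torch.linspace(0.0, 1.0, num_frames):
        latent_t = (1.0 - t) * latent_a + t * latent_b  # linear blend in latent space
        frames.append(decode(latent_t))                 # one (V, 3) vertex array per frame
    return frames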

About texture

Hello, thank you for your great work!
I have some questions about the texture mapping. Your work shows textured 3D models after editing, but I can't find related information in your paper, and the dataset doesn't include texture information either. Most works map a texture from a fixed image onto a 3D model, but after the 3D model is edited there is no longer an accurate corresponding image. How do you handle this?
Can you tell me what methods or tools you used in this step?

how to use MeshRenderer.render_mesh_tex method

Hello,
I'm unfamiliar with trimesh and pyrender, and there is no demo code for the MeshRenderer.render_mesh_tex method in this project. Could you give a demo of the method and show how to obtain the VT and texture parameters?
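
As a general reference (not the repository's MeshRenderer API), a textured mesh can be rendered offscreen with trimesh and pyrender roughly as below; the file names are placeholders, and the sketch assumes the mesh carries per-vertex UV coordinates:

import numpy as np
import trimesh
import pyrender
from PIL import Image

# Generic textured-mesh rendering with trimesh + pyrender (not the project's
# MeshRenderer; 'head.obj' and 'texture.png' are placeholder paths).
mesh = trimesh.load('head.obj', process=False)
vt = mesh.visual.uv if hasattr(mesh.visual, 'uv') else None  # (V, 2) UV coordinates, if present
mesh.visual = trimesh.visual.TextureVisuals(uv=vt, image=Image.open('texture.png'))

scene = pyrender.Scene()
scene.add(pyrender.Mesh.from_trimesh(mesh))
camera_pose = np.eye(4)
camera_pose[2, 3] = 2.5  # move the camera back along +z so the head is in view
scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3.0), pose=camera_pose)
scene.add(pyrender.DirectionalLight(intensity=3.0), pose=camera_pose)

renderer = pyrender.OffscreenRenderer(512, 512)
color, depth = renderer.render(scene)  # color: (512, 512, 3) uint8 image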

What are the horlines in staticdata?

Thanks for your great work. I have a question about the files in staticdata: what is the horlines file for? I think it is related to the landmarks, but I'm not clear about its structure. Is there a way to generate those coordinates if I want to use a different landmark annotation?

Waiting for your reply!

Training Parameter Recommendation for Dataset of Both Caricature and Regular Face Models

Hello! Thank you very much for sharing your research.

I am currently using your model to train on a dataset containing both 3DCaricShop caricatures and regular 3D heads generated from CelebA. Both types of head models have the same connectivity and a similar number of samples (~1000 each).

Unfortunately, the lowest training error I could achieve is around 40, using the same training parameters detailed in the DD3C paper, with shuffling enabled for the training dataset.

[Plot: training error over 3000 epochs]

Would you have any advice on why the training error converges at such a high value and how to lower it? I have tried to follow the paper's methodology as closely as possible, with the exception of the training dataset used. Thank you for your time; any help will be greatly appreciated.

My training code is as follows:

import os
import time

import numpy as np
import torch
from torch.utils.data import DataLoader
from tqdm import tqdm

# utils, SurfaceDeformationField, and CombinedDataset come from the project /
# user code; their imports depend on the local file layout and are omitted here.


def train(opt, dirs, trainset_num, checkpt_dir='checkpoints', set_shuffle=True):
    checkpoints_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), opt.model_dir, checkpt_dir)

    utils.cond_mkdir(checkpoints_dir)
    meta_params = vars(opt)

    # set up model
    model = SurfaceDeformationField(sum(trainset_num), **meta_params).cuda()
    params = model.parameters()

    train_dataset = CombinedDataset(dirs=dirs, num_samples=11551)
    train_dataloader = DataLoader(dataset=train_dataset, batch_size=128, shuffle=set_shuffle)

    # set up optimiser
    optimizer = torch.optim.Adam(params, lr=1e-4)
    total_steps = 0
    steps_til_summary = opt.epochs_til_checkpoint  # summary interval reuses the checkpoint interval

    with tqdm(total=len(train_dataloader) * 3000) as pbar:
        train_losses = []
        for epoch in range(3000):
            if not epoch % opt.epochs_til_checkpoint:
                torch.save(model.state_dict(),
                           os.path.join(checkpoints_dir, 'model_epoch_%04d.pth' % epoch))
                np.savetxt(os.path.join(checkpoints_dir, 'train_losses_epoch_%04d.txt' % epoch),
                           np.array(train_losses))
                # valid(model, valid_dataloader)

            for i, (model_input, gt) in enumerate(train_dataloader):
                start_time = time.time()  # timer for the per-iteration summary below
                model_input = {key: value.cuda() for key, value in model_input[0].items()}
                gt = {key: value.cuda() for key, value in gt[0].items()}

                # model returns a dict of named loss terms; sum them into one scalar
                losses = model.forward(model_input, gt)
                train_loss = 0.
                for loss_name, loss in losses.items():
                    single_loss = loss.mean()
                    train_loss += single_loss
                train_losses.append(train_loss.item())

                optimizer.zero_grad()
                train_loss.backward()
                optimizer.step()
                pbar.update(1)
                if not total_steps % steps_til_summary:
                    tqdm.write("Epoch %d, Total loss %0.6f, iteration time %0.6f" % (
                        epoch, train_loss, time.time() - start_time))
                total_steps += 1

        torch.save(model.state_dict(),
                   os.path.join(checkpoints_dir, 'model_final.pth'))
        np.savetxt(os.path.join(checkpoints_dir, 'train_losses_final.txt'),
                   np.array(train_losses))
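
For reference, a per-epoch curve like the plot mentioned above can be produced from the saved loss log; the path below is a placeholder, and iters_per_epoch must match len(train_dataloader) from the actual run:

import numpy as np
import matplotlib.pyplot as plt

# Plot mean training loss per epoch from the log saved by train().
# The file path and the 3000-epoch count mirror the training code above;
# adjust both to your own run.
losses = np.loadtxt('checkpoints/train_losses_final.txt')
epochs = 3000
iters_per_epoch = len(losses) // epochs
per_epoch = losses[:iters_per_epoch * epochs].reshape(epochs, iters_per_epoch).mean(axis=1)
plt.plot(per_epoch)
plt.xlabel('epoch')
plt.ylabel('mean training loss')
plt.show()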

How to obtain latent code of new head with trained model

Hi, may I ask how to obtain the latent code of a new head (not in the training set) given a trained model?

On a similar topic, is the latent code of the template head already provided, or must it be generated anew from a trained model?

Thanks for reading, any help is greatly appreciated!
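
For what it's worth, a common way to embed a new shape in an auto-decoder-style model is to freeze the trained network and optimise a fresh latent code against the new head's vertices. The sketch below only illustrates that idea; `decode`, its arguments, and `latent_dim` are assumptions, not the repository's actual interface:

import torch

# Hypothetical auto-decoder fitting: keep the trained decoder fixed and
# optimise a new latent code so the decoded surface matches the target head.
# `decode(latent, template_vertices)` is a placeholder for the trained
# model's decoding call, not the repository's actual API.
def fit_latent(decode, template_vertices, target_vertices,
               latent_dim=128, steps=1000, lr=1e-2):
    latent = torch.zeros(1, latent_dim, device=target_vertices.device, requires_grad=True)
    optimizer = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        pred = decode(latent, template_vertices)       # (V, 3) predicted vertex positions
        loss = ((pred - target_vertices) ** 2).mean()  # per-vertex MSE against the new head
        loss.backward()
        optimizer.step()
    return latent.detach()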

How to train the model

Hi!
Your paper is well written!
If I obtain the 3DCaricShop dataset, how do I train the model?
