
texture coordinate problem (dirt, 22 comments, closed)

pmh47 commented on June 10, 2024

texture coordinate problem

Comments (22)

shinxg commented on June 10, 2024

obj files store the vt coordinates in a shared indexed list; look at the face section. Each face record has the following format:
f v/vt/vn v/vt/vn v/vt/vn
where v, vt, and vn are 1-based indices into the vertex, texture-coordinate, and normal lists.
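As an illustration of this indexing scheme, a face record can be split into per-corner index triples like this (a minimal sketch, not from the thread; it assumes the common triangle layout and converts the 1-based OBJ indices to 0-based):

```python
def parse_face_line(line):
    """Split an OBJ 'f' record into (vertex, uv, normal) index triples.

    OBJ indices are 1-based; they are converted to 0-based here.
    Assumes the common 'f v/vt/vn v/vt/vn v/vt/vn' triangle layout;
    missing components (e.g. 'f v/vt ...') come back as None.
    """
    corners = []
    for token in line.split()[1:]:  # skip the leading 'f'
        parts = token.split('/')
        v = int(parts[0]) - 1
        vt = int(parts[1]) - 1 if len(parts) > 1 and parts[1] else None
        vn = int(parts[2]) - 1 if len(parts) > 2 and parts[2] else None
        corners.append((v, vt, vn))
    return corners

# parse_face_line("f 1/1/1 2/3/1 3/2/1") -> [(0, 0, 0), (1, 2, 0), (2, 1, 0)]
```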


quyanqiu commented on June 10, 2024

@shinxg Yes. Because of the multi-edge "seam" effect, one vertex may correspond to multiple texture coordinates; the majority of 3D models use a per-wedge UV map instead of a per-vertex UV map. So my question is: how do I compress the UV map so that it has the same dimension as the number of vertices, to fit the rasterisation operation in DIRT?

shinxg commented on June 10, 2024

For vertices that have different UV coordinates on different faces, you can add duplicated vertices with the corresponding UV coordinates.

quyanqiu commented on June 10, 2024

For some reason I cannot change the vertex count of the PCA model. I wonder if there is any other way to do texture rendering without using vertex colours, because vertex colours are not realistic compared to texture rendering.

pmh47 commented on June 10, 2024

@quyanqiu You need to duplicate the vertices when they have different UVs, as @shinxg stated. First load the vertex positions and UVs from the obj file into separate arrays, and also the indices from the faces.

obj_vertices = ...
obj_uvs = ...
obj_vertex_indices = ... - 1  # obj indices are 1-based; convert to 0-based
obj_uv_indices = ... - 1
assert len(obj_vertex_indices) == len(obj_uv_indices)
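Not part of the original answer, but for concreteness, here is one minimal way those arrays could be loaded (an illustrative sketch: it assumes a triangulated mesh whose faces use the `v/vt` or `v/vt/vn` form, and it ignores normals and all other record types):

```python
import numpy as np

def load_obj_arrays(lines):
    """Illustrative OBJ reader: returns vertex positions, UVs, and the
    0-based per-face vertex/UV index arrays. Assumes triangulated faces
    in 'f v/vt' or 'f v/vt/vn' form; other record types are skipped."""
    positions, uvs, v_idx, vt_idx = [], [], [], []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == 'v':
            positions.append([float(x) for x in parts[1:4]])
        elif parts[0] == 'vt':
            uvs.append([float(x) for x in parts[1:3]])
        elif parts[0] == 'f':
            corners = [token.split('/') for token in parts[1:4]]
            v_idx.append([int(c[0]) - 1 for c in corners])   # obj is 1-based
            vt_idx.append([int(c[1]) - 1 for c in corners])
    return (np.asarray(positions), np.asarray(uvs),
            np.asarray(v_idx), np.asarray(vt_idx))
```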

Then, build a new set of vertices, UVs, and faces, with each vertex/UV used exactly once per face, something like

expanded_vertices = []
expanded_uvs = []
expanded_faces = []
for face_index in range(len(obj_vertex_indices)):
  expanded_faces.append([len(expanded_vertices), len(expanded_vertices) + 1, len(expanded_vertices) + 2])
  expanded_vertices.extend(obj_vertices[obj_vertex_indices[face_index]])
  expanded_uvs.extend(obj_uvs[obj_uv_indices[face_index]])
assert len(expanded_vertices) == len(expanded_uvs) == len(expanded_faces) * 3

Then use the expanded_* in the call to rasterise, concatenating as in the sample code.
Note I didn't run any of the above code -- there may be bugs, but the idea is there.

If you need the vertex positions to depend on some other tensor (rather than being constant), you may need to do the 'expansion' with tf.gather instead of a loop and numpy indexing.
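In numpy terms, the gather-based expansion amounts to fancy indexing with the flattened index arrays (tf.gather plays the same role on tensors). A sketch on a hypothetical two-triangle mesh with a UV seam, where the shared vertices use different UV indices per face:

```python
import numpy as np

# Hypothetical small mesh: 4 vertices, 2 triangles sharing an edge,
# with a UV seam (shared vertices reference different UV rows per face).
obj_vertices = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
obj_uvs = np.array([[0., 0], [0.5, 0], [0.5, 1], [0, 1], [1, 0], [1, 1]])
obj_vertex_indices = np.array([[0, 1, 2], [0, 2, 3]])
obj_uv_indices = np.array([[0, 1, 2], [4, 2, 5]])

# 'Expansion' via gather: one vertex/UV per face corner.
# In TensorFlow, the equivalent is tf.gather(vertices, flat_indices).
expanded_vertices = obj_vertices[obj_vertex_indices.reshape(-1)]    # (F*3, 3)
expanded_uvs = obj_uvs[obj_uv_indices.reshape(-1)]                  # (F*3, 2)
expanded_faces = np.arange(obj_vertex_indices.size).reshape(-1, 3)  # (F, 3)
```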


quyanqiu commented on June 10, 2024

@pmh47 @shinxg Thanks for your responses. I think that is the solution; I will try it.

Frank-Dz commented on June 10, 2024

> @pmh47 @shinxg Thanks for your responses. I think that is the solution; I will try it.

Hi~ Did you make it work? I am still confused about how to do it; I tried and failed. Is there a method for this?
Thanks.

shinxg commented on June 10, 2024

> Hi~ Did you make it work? I am still confused about how to do it; I tried and failed. Is there a method for this?

import numpy as np

def convert_2_pervertex_uv(V, F, UV, TF):
    V_new = np.zeros((F.shape[0]*3, 3))
    UV_new = np.zeros((F.shape[0]*3, 2))
    F_new = np.zeros((F.shape[0], 3), dtype=np.int64)  # faces must be integer indices
    for i in range(0, F.shape[0]):
        for j in range(0, 3):
            V_new[3*i+j, :] = V[F[i, j], :]
            UV_new[3*i+j, :] = UV[TF[i, j], :]
            F_new[i, j] = 3*i+j
    return V_new, F_new, UV_new

V: #V x 3 (vertex positions)
F: #F x 3 (vertex indices per face)
UV: #UV x 2 (texture coordinates)
TF: #F x 3 (texture-coordinate indices per face)
V, F, UV, TF can be obtained here
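As a sanity check (not from the original thread), the helper above can be exercised on a toy two-triangle mesh. Indices here are assumed 0-based, and F_new is created as an integer array so it can later be used for indexing:

```python
import numpy as np

def convert_2_pervertex_uv(V, F, UV, TF):
    """Expand per-wedge UVs to per-vertex UVs by duplicating each
    face corner; F and TF are assumed 0-based."""
    V_new = np.zeros((F.shape[0] * 3, 3))
    UV_new = np.zeros((F.shape[0] * 3, 2))
    F_new = np.zeros((F.shape[0], 3), dtype=np.int64)
    for i in range(F.shape[0]):
        for j in range(3):
            V_new[3 * i + j, :] = V[F[i, j], :]
            UV_new[3 * i + j, :] = UV[TF[i, j], :]
            F_new[i, j] = 3 * i + j
    return V_new, F_new, UV_new

# Toy mesh: two triangles sharing an edge, with a UV seam.
V = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
F = np.array([[0, 1, 2], [0, 2, 3]])
UV = np.array([[0., 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]])
TF = np.array([[0, 1, 2], [4, 2, 3]])
V_new, F_new, UV_new = convert_2_pervertex_uv(V, F, UV, TF)
# V_new has 6 rows (one per face corner); F_new is [[0, 1, 2], [3, 4, 5]].
```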


Frank-Dz commented on June 10, 2024

> (quoting the convert_2_pervertex_uv code above)

Hi~ Thanks!
But when I read in the model (SMPL, but high resolution):
[image]

And when I use the above code to convert it, I get:

Traceback (most recent call last):
  File "/home/frank/PycharmProjects/MultiGarmentNetwork/test_DIRT.py", line 200, in <module>
    main()
  File "/home/frank/PycharmProjects/MultiGarmentNetwork/test_DIRT.py", line 107, in main
    [V_new, F_new, UV_new] = convert_2_pervertex_uv(V, F, UV, TF)
  File "/home/frank/PycharmProjects/MultiGarmentNetwork/test_DIRT.py", line 14, in convert_2_pervertex_uv
    V_new[3*i+j, :] = V[F[i, j], :]
IndexError: index 27554 is out of bounds for axis 0 with size 27554

I think this is caused by the indexing.

Thanks!


Frank-Dz commented on June 10, 2024

> (quoting the convert_2_pervertex_uv code above)

@shinxg Hi~ I think the problem is caused by the 1-based obj indices, so I modified the code:

def convert_2_pervertex_uv(V, F, UV, TF):
    V_new = np.zeros((F.shape[0]*3, 3))
    UV_new = np.zeros((F.shape[0]*3, 2))
    F_new = np.zeros((F.shape[0], 3), dtype=np.int64)
    for i in range(0, F.shape[0]):
        for j in range(0, 3):
            V_new[3*i+j, :] = V[F[i, j]-1, :]
            UV_new[3*i+j, :] = UV[TF[i, j]-1, :]
            F_new[i, j] = 3*i+j
    return V_new, F_new, UV_new

Now I get output, but the result seems wrong:
[image]

The original input texture is:
[image]


quyanqiu commented on June 10, 2024

@Frank-Dz The reason is that the SMPL UV orientation is not compatible with OpenGL. Try the following code, which negates your v coordinate:
UV_new[:, 1] *= -1
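The suggested flip negates the v component of every UV. A small numpy sketch, not from the thread; note that the alternative v -> 1 - v, which keeps coordinates in [0, 1], is another common way to fix the vertical flip and relies less on the texture wrap mode:

```python
import numpy as np

# Illustrative UV array (two texture coordinates).
UV_new = np.array([[0.2, 0.25], [0.8, 0.75]])

# Negate the v coordinate, as suggested above (relies on texture wrapping).
UV_flipped = UV_new.copy()
UV_flipped[:, 1] *= -1

# Alternative that keeps coordinates in [0, 1]: v -> 1 - v.
UV_flipped01 = UV_new.copy()
UV_flipped01[:, 1] = 1.0 - UV_flipped01[:, 1]
```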


Frank-Dz commented on June 10, 2024

> @Frank-Dz the reason is smpl uv order is not compatible with OpenGL ... UV_new[:, 1] *= -1

@quyanqiu Wow! Great! It works well for me!

Thanks!
Best,
Frank

Frank-Dz commented on June 10, 2024

Hi~ @quyanqiu Sorry to bother you again. Do you know how to render an obj with a transparent background, or over a specific image? All my rendered results look like:
[image]

I know I should adjust background_attributes, but I do not know how to set it.

pixels = dirt.rasterise_deferred(
        vertices=cube_vertices_clip,
        vertex_attributes=tf.concat([
            tf.ones_like(cube_vertices_object[:, :1]),  # mask
            cube_uvs,  # texture coordinates
            cube_normals_world  # normals
        ], axis=1),
        faces=cube_faces,
        background_attributes=(tf.ones([frame_height, frame_width, 6])*255),
        shader_fn=shader_fn,
        shader_additional_inputs=[texture, light_direction]
    )

Thanks for any help and guidance!
Best,
Frank


quyanqiu commented on June 10, 2024

@Frank-Dz It's straightforward: the render result is in RGBA format, where the A channel indicates which pixels belong to the model. You can do the substitution with the following code:
mask = (render_result[:, :, -1] > 0)[..., np.newaxis]
overlay = render_result[:, :, :-1] * mask + (1 - mask) * background

I have not debugged this code, but the idea is there.
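The idea above can be sketched with a runnable numpy example (array names and sizes are illustrative; an RGBA render of shape (H, W, 4) with values in [0, 1] is assumed):

```python
import numpy as np

H, W = 4, 4
render_result = np.zeros((H, W, 4))               # RGBA render, background alpha = 0
render_result[1:3, 1:3] = [1.0, 0.5, 0.25, 1.0]   # a 'model' patch with alpha = 1

background = np.ones((H, W, 3)) * 0.8             # illustrative background image

# Alpha channel > 0 marks pixels covered by the model;
# composite the RGB channels over the background.
mask = (render_result[:, :, -1] > 0)[..., np.newaxis]
overlay = render_result[:, :, :-1] * mask + (1 - mask) * background
```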


Frank-Dz commented on June 10, 2024

> mask = (render_result[:,:,-1]>0)[...,np.newaxis]
> overlay = render_result[:,:,:-1]*mask+(1-mask)*background

Thanks! But I am still a little confused; I hope this is not too much bother.

 pixels = dirt.rasterise_deferred(
        vertices=cube_vertices_clip,
        vertex_attributes=tf.concat([
            tf.ones_like(cube_vertices_object[:, :1]),  # mask
            cube_uvs,  # texture coordinates
            cube_normals_world  # normals
        ], axis=1),
        faces=cube_faces,
        background_attributes=(tf.ones([frame_height, frame_width, 6])*255),
        shader_fn=shader_fn,
        shader_additional_inputs=[texture, light_direction]
    )
    mask = (pixels[:, :, -1] > 0)[..., np.newaxis]
    mybg = tf.ones([frame_height,frame_width,3])
    overlay = pixels[:, :,-1] * mask + (1 - mask) * mybg
    save_pixels = tf.write_file(
        'textured.jpg',
        tf.image.encode_jpeg(tf.cast(overlay * 255, tf.uint8))
    )

The render result, pixels, has RGB values between 0 and 1.
So I do not understand how it can have 4 channels.
Can you give me more guidance?

Thank you very much!

Best,
Frank


quyanqiu commented on June 10, 2024

Your code is wrong here:
overlay = pixels[:, :, -1] * mask + (1 - mask) * mybg
It should be:
overlay = pixels[:, :, :-1] * mask + (1 - mask) * mybg

As for why the RGB values are between 0 and 1, and why there can be 4 channels: maybe you should first learn what rasterisation is; it is a classical topic in computer graphics.


Frank-Dz commented on June 10, 2024

> Your code is wrong here: overlay = pixels[:, :, -1] * mask + (1 - mask) * mybg
> It should be: overlay = pixels[:, :, :-1] * mask + (1 - mask) * mybg

Thanks again! So I just printed it out:

pixels = dirt.rasterise_deferred(
        vertices=cube_vertices_clip,
        vertex_attributes=tf.concat([
            tf.ones_like(cube_vertices_object[:, :1]),  # mask
            cube_uvs,  # texture coordinates
            cube_normals_world  # normals
        ], axis=1),
        faces=cube_faces,
        background_attributes=(tf.ones([frame_height, frame_width, 6])*255),
        shader_fn=shader_fn,
        shader_additional_inputs=[texture, light_direction]
    )
    print(pixels.shape)

The output is "(600, 600, 3)".

And after using the changed code:

 mask = (pixels[:, :, -1] > 0)[..., np.newaxis]
    mybg = tf.ones([frame_height, frame_width, 3])
    overlay = pixels[:, :, :-1] * mask + (1 - mask) * mybg
    save_pixels = tf.write_file(
        'textured.jpg',
        tf.image.encode_jpeg(tf.cast(overlay * 255, tf.uint8))
    )

I got the following error:

WARNING:tensorflow:From /home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/dirt/matrices.py:41: calling norm (from tensorflow.python.ops.linalg_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
(600, 600, 3)
Traceback (most recent call last):
  File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1659, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Index out of range using input dim 3; input has only 3 dims for 'strided_slice_3' (op: 'StridedSlice') with input shapes: [600,600,3], [4], [4], [4] and with computed input tensors: input[3] = <1 1 1 1>.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/frank/PycharmProjects/MultiGarmentNetwork/test_DIRT_render.py", line 213, in <module>
    main()
  File "/home/frank/PycharmProjects/MultiGarmentNetwork/test_DIRT_render.py", line 201, in main
    overlay = pixels[:, :, :,-1] * mask + (1 - mask) * mybg
  File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 654, in _slice_helper
    name=name)
  File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 820, in strided_slice
    shrink_axis_mask=shrink_axis_mask)
  File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 9356, in strided_slice
    shrink_axis_mask=shrink_axis_mask, name=name)
  File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3300, in create_op
    op_def=op_def)
  File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1823, in __init__
    control_input_ops)
  File "/home/frank/Desktop/mesh-py3/my_venv/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1662, in _create_c_op
    raise ValueError(str(e))
ValueError: Index out of range using input dim 3; input has only 3 dims for 'strided_slice_3' (op: 'StridedSlice') with input shapes: [600,600,3], [4], [4], [4] and with computed input tensors: input[3] = <1 1 1 1>.

Process finished with exit code 1

That is why I am confused: the output of dirt.rasterise_deferred seems to be a 3-channel image.

Thanks again!


Frank-Dz commented on June 10, 2024

@quyanqiu Hi~ I know there is sample code that uses rasterise:

  pixels = dirt.rasterise(
        vertices=cube_vertices_clip,
        faces=cube_faces,
        vertex_colors=vertex_colors_lit,
        background=tf.zeros([frame_height, frame_width, 3])*255,
        width=frame_width, height=frame_height, channels=3
    )
    print(pixels.shape)

The output is (480, 640, 3), so the channel count seems to be 3 there too.

The code is in samples/simple.py.


quyanqiu commented on June 10, 2024

You may need to consult the author for more detail. If you only need rendering, not backpropagation, you could use another renderer such as pyrender.


Frank-Dz commented on June 10, 2024

pmh47 commented on June 10, 2024

@Frank-Dz You need to use rasterise_deferred, similar to samples/textured.py, but with an extra parameter to shader_fn for the background image. Assuming you have a background image bg_im of size [frame_height, frame_width, 3], then:

  • add bg_im to the list passed as the shader_additional_inputs parameter of rasterise_deferred
  • add bg_im as a new parameter at the top of shader_fn
  • at the end of shader_fn, change the final computation to something like pixels = (diffuse_contribution + ambient_contribution) * mask + bg_im * (1. - mask)


Frank-Dz commented on June 10, 2024

> @Frank-Dz You need to use rasterise_deferred, similar to samples/textured.py, but with an extra parameter to shader_fn for the background image.

You saved my day! It works well for me!
[image]
Thanks!
