Comments (6)

Steve-Tod commented on September 24, 2024

Hi, thanks for your interest in our work! The composed_scene here is the ground-truth mesh together with the added grasps; it is only used for visualization. If you want to get the 3D reconstruction, you can refer to this line: https://github.com/UT-Austin-RPL/GIGA/blob/d67c4388d334babe6c11c1555a6e848fb4828c84/scripts/eval_geometry_voxel.py#LL80C32-L80C32.

chisarie commented on September 24, 2024

Thank you for the fast response. What exactly is the argument pc_in? Is it the raw point cloud from the camera, including all the objects and the table they are sitting on?
I am trying to run inference from a real depth camera, not from the simulation.

Steve-Tod commented on September 24, 2024

Hi, we crop the raw point cloud to remove the table surface and only keep the objects.

pc = pc.crop(bounding_box)
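
In case it is useful, here is a minimal sketch of that cropping step, assuming the point cloud is an open3d PointCloud and using hypothetical workspace bounds (adjust to your own setup):

import numpy as np
import open3d as o3d

# Load or build the raw point cloud from your camera (the path is hypothetical).
pc = o3d.io.read_point_cloud("scene.ply")

# Hypothetical workspace bounds in the task frame, in meters; setting the lower
# z bound above the table height removes the table surface.
lower = np.array([0.0, 0.0, 0.05])
upper = np.array([0.3, 0.3, 0.3])
bounding_box = o3d.geometry.AxisAlignedBoundingBox(lower, upper)

# Keep only the points inside the workspace box.
pc = pc.crop(bounding_box)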

chisarie commented on September 24, 2024

Hi, so I think I managed to put together a working script. Could you please let me know if this (pseudocode) looks correct? Thank you very much in advance.

import numpy as np
import torch

from vgn.detection_implicit import VGNImplicit  # GIGA
from vgn.perception import TSDFVolume, create_tsdf
from vgn.ConvONets.conv_onet.generation import Generator3D

O_RESOLUTION = 40  # TSDF grid resolution
O_SIZE = 0.3       # workspace size in meters (the value used in the repo)
O_VOXEL_SIZE = O_SIZE / O_RESOLUTION

# model_path, vis, depth, camera_intrinsic and camera_pose come from my own setup.
giga_model = VGNImplicit(
    model_path=model_path,
    model_type="giga",
    visualize=vis,
    best=True,
    qual_th=0.75,
    force_detection=True,
    out_th=0.1,
    select_top=False,
)

# Mesh generator that extracts a mesh from the network's implicit occupancy field.
generator = Generator3D(
    giga_model.net,
    device=giga_model.device,
    input_type='pointcloud',
    padding=0,
)

# Fuse the single depth image into a TSDF volume over the workspace.
tsdf_volume = create_tsdf(
    size=O_SIZE,
    resolution=O_RESOLUTION,
    depth_imgs=np.expand_dims(depth, axis=0),
    intrinsic=camera_intrinsic,
    extrinsics=np.expand_dims(camera_pose, axis=0),
)

# Input to generate_mesh: the (1, 40, 40, 40) TSDF grid.
pc_torch = torch.tensor(tsdf_volume.get_grid())
pred_mesh, _ = generator.generate_mesh({'inputs': pc_torch})

# The mesh is predicted in normalized coordinates; scale it back to metric size.
scale_matrix = np.eye(4)
scale_matrix[:3, :3] *= O_SIZE
pred_mesh.apply_transform(scale_matrix)
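
As a possible follow-up step (assuming generate_mesh returns a trimesh.Trimesh, which the apply_transform call above suggests), the reconstructed scene could be saved for inspection:

# Export the predicted scene mesh, e.g. to view it in MeshLab (filename is arbitrary).
pred_mesh.export("pred_scene.obj")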

Steve-Tod commented on September 24, 2024

Yes, I think it's correct. I just re-ran the code and found that the input to generate_mesh has size torch.Size([1, 40, 40, 40]), so it is a TSDF grid.
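
For reference, a small sanity check along these lines, reusing tsdf_volume from the script above:

import torch

# The grid from create_tsdf should already have the (1, 40, 40, 40) shape reported above.
pc_torch = torch.tensor(tsdf_volume.get_grid())
assert pc_torch.shape == torch.Size([1, 40, 40, 40]), pc_torch.shape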

chisarie commented on September 24, 2024

Great, thank you!
