rotationcontinuity's People

Contributors

papagina

rotationcontinuity's Issues

About the Geodesic Distance between Rotation Matrices

Hi! First of all, allow me to congratulate you on the paper. It's one of the best-grounded and most innovative works I have read in the past few months! I would like to ask two quick questions about the geodesic loss function that you are using, if that's convenient:

  1. Why do you clamp the input of torch.acos to [-1, 1] by hand? Isn't it supposed to lie in that interval anyway? After all, the trace can't be bigger than 3, for example, because R_gt R_pred^T belongs to SO(3), right? Did you find it to be numerically unstable? (My own computation is sketched after these two questions.)

  2. Did you have any problems training any of the architectures you used with this loss function? I'm trying to do the same in an object pose estimation problem, but the network doesn't learn much with either the older representations or yours. Did you, by any chance, notice any vanishing-gradient problems when using it?
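
Regarding question 1, this is roughly how I compute the geodesic distance on my side (a minimal sketch, not your exact implementation; the clamping is the part I am asking about):

import torch

def geodesic_distance(R_gt, R_pred, eps=1e-7):
    # R_gt, R_pred: batch x 3 x 3 rotation matrices
    R = torch.bmm(R_gt, R_pred.transpose(1, 2))
    cos = (R[:, 0, 0] + R[:, 1, 1] + R[:, 2, 2] - 1.0) / 2.0
    # in exact arithmetic cos already lies in [-1, 1] because R is in SO(3),
    # but floating point can push it slightly outside and make acos return NaN
    cos = torch.clamp(cos, -1.0 + eps, 1.0 - eps)
    return torch.acos(cos)  # rotation angle in [0, pi]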

About "pose"

Hello, I have a question about the sanity test code.
I don't understand what "pose" means here, or what the function below computes from the rotation matrix.

def compute_pose_from_rotation_matrix(T_pose, r_matrix):

It seems this function just calculates the transpose of the given matrix; is that right?
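
From the name, my guess is that it applies each rotation in the batch to a fixed template pose, roughly like this (just my reading of the signature, not the actual implementation):

import torch

def compute_pose_from_rotation_matrix(T_pose, r_matrix):
    # T_pose:   joint_num x 3 template joint positions (my assumption)
    # r_matrix: batch x 3 x 3 rotation matrices
    # rotate every template joint by every rotation in the batch; the joints are
    # row vectors, hence the multiplication by the transpose on the right
    return torch.matmul(T_pose.unsqueeze(0), r_matrix.transpose(1, 2))  # batch x joint_num x 3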

How can we compute Euler angles from quaternion form?

I am trying to use the quaternion representation instead of rotation matrices.

I have already trained my model, but to test it I need to compute Euler angles from the quaternion output, just as you compute Euler angles from rotation matrices.

For example, the output in quaternion form for B = batch_size = 2 (-> Bx4) is:

 tensor([[ 0.0725, -0.0645,  0.0308,  0.9948],
        [-0.5235, -0.2456,  0.0824,  0.8117]], device='cuda:0')

And the output in rotation matrix form for B = batch_size = 2 (-> Bx3x3) is:

tensor([[[ 0.9933, -0.0871,  0.0755],
         [ 0.0850,  0.9959,  0.0316],
         [-0.0779, -0.0250,  0.9966]],

        [[ 0.7244, -0.0551, -0.6871],
         [ 0.2493,  0.9503,  0.1866],
         [ 0.6427, -0.3065,  0.7021]]], device='cuda:0')

Could you help me with how to compute Euler angles from the quaternion form?
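
For what it's worth, this is the standard conversion I am using at the moment (my own helper, not from your repo; I am assuming my output is ordered (x, y, z, w), so please correct me if your convention differs):

import torch

def quaternion_to_euler(q):
    # q: batch x 4, assumed here to be ordered (x, y, z, w) as in my printout;
    # swap the indices below if the convention is (w, x, y, z)
    x, y, z, w = q[:, 0], q[:, 1], q[:, 2], q[:, 3]
    # roll (rotation about x)
    roll = torch.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # pitch (rotation about y), input clamped so asin never sees values outside [-1, 1]
    pitch = torch.asin(torch.clamp(2 * (w * y - z * x), -1.0, 1.0))
    # yaw (rotation about z)
    yaw = torch.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return torch.stack([roll, pitch, yaw], dim=1)  # batch x 3, in radians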

About the limitation of the representation space in the training code

First of all, thank you so much for this good work, which has been consistently referenced over the years.
I have a question about the limitation of the representation space in your code.

In the paper, you said:

Finally, we can define the set D as that where the above Gram-Schmidt-like process does not map back to SO(n): specifically, this is where the dimension of the span of the n−1 vectors input to gGS is less than n − 1.

However, there is no code (here and here) that enforces linear independence of the two 3D vectors.

Was it intentionally omitted for convenience? Is there a good way to enforce it?
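
For example, something along these lines is what I had in mind (just a sketch of a validity check; the names are mine, not from your code):

import torch

def is_valid_6d(ortho6d, eps=1e-6):
    # ortho6d: batch x 6, interpreted as two 3D vectors a1 and a2
    a1 = ortho6d[:, 0:3]
    a2 = ortho6d[:, 3:6]
    # the Gram-Schmidt-like map is only well defined when a1 and a2 span a
    # 2D subspace, i.e. when their cross product is non-zero
    cross = torch.cross(a1, a2, dim=1)
    return cross.norm(dim=1) > eps  # boolean mask over the batch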

Thanks.

Expecting the full code of the paper

I found the 6D representation to be very powerful, and I really look forward to the remaining code of this project, so I can quickly check it and use it in our future research.
Thanks very much!

NaN or Inf found in input tensor when running trainIK.py

After a random number of iterations I get
NaN or Inf found in input tensor

I ran some tests and inserted a check in "train_one_iteraton"; when this occurs, there are NaN values at the output of get_augmented_gt_seq_for_training.

I also checked that there are no NaN values in the data.
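
In case it helps to reproduce, the check I inserted is essentially this (my own debugging helper, not part of the repo):

import torch

def assert_finite(name, tensor):
    # fail fast so I know exactly which tensor first contains NaN/Inf
    if not torch.isfinite(tensor).all():
        raise RuntimeError("NaN or Inf found in " + name)

# in my local copy I call this right after get_augmented_gt_seq_for_training(...)
# inside train_one_iteraton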

which loss is better?

Hi, I see there are three losses in your code:
loss_t, loss_rmat, loss_geodesic
So, which loss is the best metric for evaluating two translations (or two rotation matrices)?

Possible bug when converting axis-angle to rotation matrix

Very impressive and elegant theory that pushes this field forward!

However, I noticed what may be a bug.

axis, theta = normalize_vector(rod, return_mag=True)
sin = torch.sin(theta)
qw = torch.cos(theta)
qx = axis[:,0]*sin
qy = axis[:,1]*sin
qz = axis[:,2]*sin

theta = torch.tanh(axisAngle[:,0])*np.pi #[-180, 180]
sin = torch.sin(theta)
axis = normalize_vector(axisAngle[:,1:4]) #batch*3
qw = torch.cos(theta)
qx = axis[:,0]*sin
qy = axis[:,1]*sin
qz = axis[:,2]*sin

The code above is from the two functions that convert a Rodrigues vector (first snippet) and an axis-angle input (second snippet) to a rotation matrix via quaternion terms.
The θ passed to torch.sin and torch.cos is used as-is, but it should be θ/2 instead, per "Maths - AxisAngle to Quaternion".
I worked through a toy case by hand (rotating the y axis by 90 degrees around the x axis), and only the θ/2 version gives the correct result.
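
For comparison, the half-angle version I tested looks like this (a minimal sketch of just the axis-angle to quaternion step, written by me, not taken from the repo):

import torch

def axis_angle_to_quaternion(axis, theta):
    # axis: batch x 3 unit vectors, theta: batch of rotation angles in radians
    half = theta / 2.0          # half-angle, per the axis-angle -> quaternion formula
    sin = torch.sin(half)
    qw = torch.cos(half)
    qx = axis[:, 0] * sin
    qy = axis[:, 1] * sin
    qz = axis[:, 2] * sin
    return torch.stack([qw, qx, qy, qz], dim=1)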

I wonder: is this a bug, and if so, does it affect other conversion functions and, in turn, the experimental results?

Composition of two rotations using the 6D representation

Hi,

  1. Is there any way to compose two rotations using your representation, other than converting back and forth to other representations?

  2. Is there a way to rotate a vector using your representation directly (instead of using a rotation matrix)? My current workaround for both is sketched below.
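
What I do at the moment is go through the rotation matrix, roughly like this. It is only a sketch, and it assumes the repo's 6D-to-matrix function (compute_rotation_matrix_from_ortho6d in tools.py, if I read the code correctly) and that the 6D vector is the first two matrix columns concatenated:

import torch
from tools import compute_rotation_matrix_from_ortho6d  # the repo's 6D -> batch x 3 x 3 map

def compose_ortho6d(r6d_a, r6d_b):
    # compose by multiplying the recovered rotation matrices (apply b first, then a)
    Rab = torch.bmm(compute_rotation_matrix_from_ortho6d(r6d_a),
                    compute_rotation_matrix_from_ortho6d(r6d_b))
    # back to 6D by keeping the first two columns (assuming the [col0; col1] layout)
    return torch.cat([Rab[:, :, 0], Rab[:, :, 1]], dim=1)

def rotate_vector_ortho6d(r6d, v):
    # v: batch x 3; rotate each vector by the corresponding 6D rotation
    R = compute_rotation_matrix_from_ortho6d(r6d)
    return torch.bmm(R, v.unsqueeze(2)).squeeze(2)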

Thanks

Questions about continuous representation

Hi expert,
Nice work on building a continuous representation of rotation matrices (SO(3)).
I have two questions about keeping the representation continuous.

  1. I would like to interpolate between rotation matrices, so currently I transform each rotation matrix to a quaternion and then use slerp.
    But since you mention that quaternions may be discontinuous, I would instead like to transform the rotation matrices to the representation
    space R and interpolate linearly there (a sketch of what I mean follows these questions). Would that be a continuous and correct interpolation method?

  2. I would also like to know: if I transform the representation space R with PCA or kernel PCA, does it remain continuous after that transform?
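
For question 1, what I have in mind is roughly the following sketch, assuming a helper that maps the 6D representation back to a rotation matrix (the Gram-Schmidt-like map from the paper; I am guessing the import path):

import torch
from tools import compute_rotation_matrix_from_ortho6d  # 6D -> batch x 3 x 3 (Gram-Schmidt-like map)

def interp_rotation_6d(r6d_0, r6d_1, t):
    # plain linear interpolation in the representation space R (batch x 6);
    # mapping the result back through the Gram-Schmidt-like step keeps every
    # interpolated sample in SO(3)
    r6d_t = (1.0 - t) * r6d_0 + t * r6d_1
    return compute_rotation_matrix_from_ortho6d(r6d_t)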

sanity_check issue with geodesic loss

In order to avoid the loss becoming NaN:

In compute_geodesic_distance_from_two_matrices(m1, m2)
I changed
cos = torch.min(cos, torch.autograd.Variable(torch.ones(batch).cuda()) )
cos = torch.max(cos, torch.autograd.Variable(torch.ones(batch).cuda())*-1 )
to
cos = torch.min(cos, torch.autograd.Variable(torch.ones(batch).cuda()) * 0.9999)
cos = torch.max(cos, torch.autograd.Variable(torch.ones(batch).cuda()) * -0.9999 )

Training then ran, but after a few iterations I got a constant error, the same for all the models.
Then I realized there is a problem, as the theta value can be negative, and in the model definition we have:
def compute_geodesic_loss(self, gt_r_matrix, out_r_matrix):
theta = tools.compute_geodesic_distance_from_two_matrices(gt_r_matrix, out_r_matrix)
error = theta.mean()
return error
So, if I am not wrong, the error can be negative.

I tried, in compute_geodesic_distance_from_two_matrices(m1, m2),
returning the absolute value at the end: return torch.abs(theta)

I now see a steady decrease of the geodesic loss for all the models :)
After 100,000 iterations:
ortho6d 0.018
ortho5d 0.021
quaternion 0.162
euler 0.198
rodriguez 0.104
euler_sin_cos 0.066
Quaternion_half 0.041

errors in sanity_test/code/train.py

Hi,

I am very interested in your work and am trying to reproduce it.

I noticed these errors in sanity_test/code/train.py:

print ("################TEST ON Rodriguez-vectors, input=r_matrix, loss=geodesic#####################")
model_rmg = Model(is_linear=False, out_rotation_mode="Rodriguez-vectors")
train(model_emg, input_mode = "r_matrix", loss_mode="geodesic", sampling_method="quaternion",batch=64 , total_iter=500001, out_weight_folder=out_weight_folder+"rmg/")

model should be model_rmg

print ("################TEST ON euler_sin_cos, input=r_matrix, loss=geodesic#####################")
model_escmg = Model(is_linear=False, out_rotation_mode="euler_sin_cos")
train(model_emg, input_mode = "r_matrix", loss_mode="geodesic", sampling_method="axis_angle",batch=64 , total_iter=500001, out_weight_folder=out_weight_folder+"escmg/")

model should be model_escmg

print ("################TEST ON Quaternion_half, input=r_matrix, loss=geodesic#####################")
model_qhmp = Model(is_linear=False, out_rotation_mode="Quaternion_half")
train(model_emg, input_mode = "r_matrix", loss_mode="pose", sampling_method="quaternion",batch=64 , total_iter=500001, out_weight_folder=out_weight_folder+"qhmp/")
model_qhmg = Model(is_linear=False, out_rotation_mode="Quaternion_half")
train(model_emg, input_mode = "r_matrix", loss_mode="geodesic", sampling_method="quaternion",batch=64 , total_iter=500001, out_weight_folder=out_weight_folder+"qhmg/")

model should be model_qhmg

I ran train.py after these modifications.
Training with loss = pose ran without problems.

For all the cases with loss = geodesic, I got loss = NaN after a few iterations.

Please see the issue "sanity_check issue with geodesic loss".

standard.bvh

Hi,
How can I generate standard.bvh when using the Human3.6M dataset?

About the function that transforms your 6D Representation to Rotation Matrix

Hi again! May I ask you something more about your project code?

The function you use to transform the 6D output vector from the last linear layer into a rotation matrix in SO(3) seems to assume that the 6D output vector consists of two 3D vectors that are already orthogonal, and it does not apply the Gram-Schmidt-like process you describe in the paper to the second column. Is this a bug, or am I missing something and you have a reason to assume this orthogonality? The mapping I expected is sketched below.
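
For reference, the process I expected from the paper looks like this (my own sketch of the Gram-Schmidt-like mapping, not your code):

import torch
import torch.nn.functional as F

def rotation_matrix_from_6d(x):
    # x: batch x 6, two raw 3D vectors a1 and a2 (not assumed orthogonal)
    a1, a2 = x[:, 0:3], x[:, 3:6]
    b1 = F.normalize(a1, dim=1)
    # Gram-Schmidt step for the second column: subtract the component along b1
    b2 = F.normalize(a2 - (b1 * a2).sum(dim=1, keepdim=True) * b1, dim=1)
    b3 = torch.cross(b1, b2, dim=1)            # third column from the cross product
    return torch.stack([b1, b2, b3], dim=2)    # batch x 3 x 3 with columns b1, b2, b3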

Thanks in advance :)

Missing configuration file

Hi,

Thank you for sharing your code. I would be interested in trying to reproduce your experiments, but it seems that configuration files are missing at least for ShapeNet, e.g.:

param.read_config("../train/test0306_plane_ortho5d/test0306_plane_ortho5d.config")

Would it be possible for you to upload these, or at least to describe the variants you experimented with?
