Eigen::Matrix<double,4,4> local, inv;
Eigen::Matrix<double,3,1> y = (J - I).normalized();
Eigen::Matrix<double,3,1> x = y.cross(K - I).normalized();
local << x, y, x.cross(y), (J + I) / 2.0, 0.0, 0.0, 0.0, 1.0;
inv = Eigen::Matrix<double,4,4>::Identity();
// I don't use inverse() as I assume the following code is faster
Eigen::Matrix<double,3,3> Rinv = local.block(0,0,3,3).transpose();
inv.block(0,0,3,3) = Rinv;
inv.block(0,3,3,1) = -(Rinv * local.block(0,3,3,1));
double d = (I - J).norm(); // cannot be 0
double theta = std::asin(offset / d) * 2.0;
rot << std::cos(theta), -1.0 * std::sin(theta), std::sin(theta), std::cos(theta);
foo << I, 1.0;
foo = inv * foo; // Don't inline this into the next line: there is an issue between Eigen 3.1.2 and Clang.
foo << 0.0, rot * foo.block(1,0,2,1), 1.0;
*JC = (local * foo).block(0,0,3,1);
// End code
This code is a direct translation of old Eigen2 code, so I did not know that a "homogeneous()" method exists. I should check that.
I reused the variable here to limit the amount of memory needed. I know it is only a 3x1 vector, but I tried to apply this everywhere in my code. We work with quite big time-series data, so the less memory we use, the better. With the same idea in mind, I also tried to reduce the number of temporary variables.
I know I can use the methods "tail" or "segment" instead of "block", but the difference is only in the simplicity of writing the code. There is no optimisation in these two methods compared to "block", is there?
Regarding the use of Transform: a long time ago (i.e. more than three years), I contacted Gael on IRC about the possibility of applying a 4x4 matrix to a 3x1 vector. At that time it was not possible, but it seems it is now. Let me check how it works.
So it seems I have to rewrite all of this code :-)
Do you have any other advice on how to better write this code? This is only a very small subset of all the functions used to compute data in a biomechanical model.
Thanks a lot,