Re: [eigen] Incorrect result of multiplication with scalar (mingw gcc 4.5 x64)



2010/8/26 Eamon Nerbonne <eamon.nerbonne@xxxxxxxxx>:
> I normally compile with a variety of other flags, both with and without
> Eigen's vectorization - they all behave identically, so I doubt compiler
> flags matter at all.

I confirm the bug with Eigen3 with all the versions of GCC that I've
tried: 4.5, 4.3, even 3.4.

This suggests an Eigen bug.

Gael: this is probably an issue in the products code / blas traits.
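
For anyone hitting this in the meantime, the intermediate-variable
workaround Eamon describes below would look roughly like this (just an
untested sketch; "CovarianceWorkaround" is only an illustrative name):

    static MatrixXd CovarianceWorkaround(Eigen::MatrixBase<MatrixXd> const & points,
                                         VectorXd const & mean) {
        // Assign the whole expression to a named temporary with operator=
        // instead of returning the expression directly.
        MatrixXd cov = (points.colwise() - mean) * (points.colwise() - mean).transpose()
                       * (1.0/(points.cols()-1.0));
        return cov;
    }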


>
> An independent issue is that MinGW has some bugs with respect to compiler
> intrinsics; in particular, <windows.h> must be included before various
> intrinsics headers, which is why I leave bench/BenchTimer.h in there and
> include it before Eigen/Core.

Good to know; we could make that work for most people by letting
Eigen/Core include <windows.h> before the intrinsics headers when
compiling on MinGW.
Can you please make a patch?
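
A rough sketch of what such a patch could look like (untested, and the
exact placement inside Eigen/Core would need checking):

    // Near the top of Eigen/Core, before any intrinsics headers are included:
    #if defined(__MINGW32__) || defined(__MINGW64__)
      // MinGW wants <windows.h> included before the intrinsics headers.
      #include <windows.h>
    #endif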

Benoit

>
> --eamon@xxxxxxxxxxxx - Tel#:+31-6-15142163
>
>
> On Thu, Aug 26, 2010 at 16:08, Eamon Nerbonne <eamon.nerbonne@xxxxxxxxx>
> wrote:
>>
>> On Thu, Aug 26, 2010 at 15:22, Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
>> wrote:
>>>
>>> Hi,
>>>
>>> Since we're discussing a possible compiler bug here, can you please
>>> make a compilable testcase and give us your complete g++ command line?
>>>
>>> Benoit
>>>
>>> 2010/8/26 Eamon Nerbonne <eamon.nerbonne@xxxxxxxxx>:
>>> > Hi all,
>>> >
>>> > I stumbled across a compiler-specific bug (current Hg tip, revision
>>> > 'fix bad "using typename"'):
>>> >
>>> >     static MatrixXd Covariance(Eigen::MatrixBase<MatrixXd> const & points,
>>> >                                VectorXd const & mean) {
>>> >         return (points.colwise() - mean) * (points.colwise() - mean).transpose()
>>> >                * (1.0/(points.cols()-1.0));
>>> >     }
>>> >
>>> >     static MatrixXd CovarianceB(Eigen::MatrixBase<MatrixXd> const & points,
>>> >                                 VectorXd const & mean) {
>>> >         VectorXd diff = VectorXd::Zero(points.rows());
>>> >         MatrixXd cov = MatrixXd::Zero(points.rows(),points.rows());
>>> >         for(int i=0;i<points.cols();++i) {
>>> >             diff.noalias() = points.col(i) - mean;
>>> >             cov.noalias() += diff * diff.transpose();
>>> >         }
>>> >         return cov * (1.0/(points.cols()-1.0));
>>> >     }
>>> >
>>> > These two methods return approximately equal results under MSC but not
>>> > under GCC; it turns out the multiplication by the scalar
>>> > "1.0/(points.cols()-1.0)" is ignored by GCC in the first example.
>>> >
>>> > Rearranging the order of the multiplication (i.e. putting the scalar
>>> > factor in front or in the middle) provides a workaround at no performance
>>> > cost, as does placing the result into an intermediate variable with the
>>> > assignment operator rather than returning it directly (with a performance
>>> > cost).  On the other hand, this is also broken:
>>> >
>>> >     static MatrixXd Covariance(Eigen::MatrixBase<MatrixXd> const & points,
>>> >                                VectorXd const & mean) {
>>> >         return (1.0/(points.cols()-1.0))
>>> >                * ((points.colwise() - mean) * (points.colwise() - mean).transpose());
>>> >     }
>>> >
>>> > I'm guessing some specialization of GeneralProduct is wrong somewhere,
>>> > but as to why this is compiler-specific or what exactly the problem is,
>>> > I don't know.
>>> >
>>> > --eamon@xxxxxxxxxxxx - Tel#:+31-6-15142163
>>> >
>>>
>>>
>>
>>
>> // A quick test case.
>> // Compile with: g++ -DSTANDALONE -I %EIGEN3_DIR% -march=native CovarianceStandalone.cpp && a
>> // ...or adapt as appropriate.
>> #include <bench/BenchTimer.h>
>> #include <Eigen/Core>
>> using namespace Eigen;
>>
>> static MatrixXd Covariance(Eigen::MatrixBase<MatrixXd> const & points,
>>                            VectorXd const & mean) {
>>     return (points.colwise() - mean) * (points.colwise() - mean).transpose()
>>            * (1.0/(points.cols()-1.0));
>> }
>>
>> static MatrixXd CovarianceB(Eigen::MatrixBase<MatrixXd> const & points,
>>                             VectorXd const & mean) {
>>     VectorXd diff = VectorXd::Zero(points.rows());
>>     MatrixXd cov = MatrixXd::Zero(points.rows(),points.rows());
>>     for(int i=0;i<points.cols();++i) {
>>         diff.noalias() = points.col(i) - mean;
>>         cov.noalias() += diff * diff.transpose();
>>     }
>>     return cov * (1.0/(points.cols()-1.0));
>> }
>>
>> static MatrixXd CovarianceC(Eigen::MatrixBase<MatrixXd> const & points,
>>                             VectorXd const & mean) {
>>     return (points.colwise() - mean) * (1.0/(points.cols()-1.0))
>>            * (points.colwise() - mean).transpose();
>> }
>>
>>
>> bool testIt() {
>>     MatrixXd points = Eigen::MatrixXd::Random(7,1001);
>>     VectorXd mean = points.rowwise().sum() * (1.0/points.cols());
>>     return Covariance(points,mean).isApprox(CovarianceB(points,mean));
>> }
>>
>> bool testIt2() {
>>     MatrixXd points = Eigen::MatrixXd::Random(7,1001);
>>     VectorXd mean = points.rowwise().sum() * (1.0/points.cols());
>>     return Covariance(points,mean).isApprox(CovarianceC(points,mean));
>> }
>>
>> bool testIt3() {
>>     MatrixXd points = Eigen::MatrixXd::Random(7,1001);
>>     VectorXd mean = points.rowwise().sum() * (1.0/points.cols());
>>     return CovarianceB(points,mean).isApprox(CovarianceC(points,mean));
>> }
>>
>>
>>
>> #ifndef STANDALONE
>>
>> #define BOOST_TEST_INCLUDED
>> #include <boost/test/unit_test.hpp>
>>
>> BOOST_AUTO_TEST_CASE( covariance_standalone_test ){
>>     BOOST_CHECK(testIt());
>>     BOOST_CHECK(testIt2());
>>     BOOST_CHECK(testIt3());
>> }
>>
>> #else
>> #include <iostream>
>> int main(int argc, char *argv[] ) {
>>     bool isOk =  testIt();
>>     bool isOk2 =  testIt2();
>>     bool isOk3 =  testIt3();
>>     std::cout<<"testIt(): "<< isOk<<"\n";
>>     std::cout<<"testIt2():"<< isOk2<<"\n";
>>     std::cout<<"testIt3():"<< isOk3<<"\n";
>>     if(!isOk || !isOk2 || !isOk3) {
>>         std::cout << "Bad output!\n";
>>         return 1;
>>     }
>>
>>     return 0;
>> }
>>
>> #endif
>>
>
>


