[eigen] LU precision tuning
[ Thread Index | Date Index | More lists.tuxfamily.org/eigen Archives ]
Hi,

This is mostly for Hauke, but the attached file biscuitlu is worthy of being archived here, so here goes...

In decompositions such as LU, we need to decide when to consider something to be zero, and this depends on an arbitrary cutoff that can only be tuned by experiment. In LLt we were lucky to have Keir, who found a research paper telling us the optimal cutoff, but there doesn't seem to be anything like that for LU. So:

- the attached patch difflu allows LU to override the default precision level;
- the attached program biscuitlu tests the precision of LU::rank() for matrix sizes from 100 to 1000, for various precisions.

This should allow us to infer the formula for the maximum precision we can use while still getting exact rank determination. Once this range of values is known, the next step will be to find the optimal 'precision' value to get optimally precise LU::solve(). So if you want to help, this 2nd program remains to be written!

Benoit
Attachment: difflu (binary data)
#include <Eigen/LU>
#include <Eigen/Array>
#include <ctime>

using namespace Eigen;
using namespace std;

const int repeat = 10;

template<typename Derived>
void doSomeRankPreservingOperations(Eigen::MatrixBase<Derived>& m)
{
  typedef typename Derived::RealScalar RealScalar;
  for(int a = 0; a < 3*(m.rows()+m.cols()); a++)
  {
    RealScalar d = Eigen::ei_random<RealScalar>(-1,1);
    int i = Eigen::ei_random<int>(0,m.rows()-1); // i is a random row number
    int j;
    do {
      j = Eigen::ei_random<int>(0,m.rows()-1);
    } while (i==j); // j is another one (must be different)
    m.row(i) += d * m.row(j);
    i = Eigen::ei_random<int>(0,m.cols()-1); // i is a random column number
    do {
      j = Eigen::ei_random<int>(0,m.cols()-1);
    } while (i==j); // j is another one (must be different)
    m.col(i) += d * m.col(j);
  }
}

template<typename Scalar>
float test_lu_rank_one_size_one_precision(int size, typename NumTraits<Scalar>::Real precision)
{
  float result = 0;
  for(int i = 0; i < size; i++)
  {
    int rank = ei_random<int>(1, size-1);
    typedef Matrix<Scalar,Dynamic,Dynamic> Mat;
    Mat m = Mat::Random(size,size);
    for(int k = rank; k < size; k++) m.row(k).setZero();
    doSomeRankPreservingOperations(m);
    LU<Mat> lu(m, precision);
    result += float(abs(rank - lu.rank())) / size;
  }
  return result;
}

template<typename Scalar>
void test_lu_rank()
{
  typedef typename NumTraits<Scalar>::Real RealScalar;
#ifdef __GNUC__
  cout << "starting " << __PRETTY_FUNCTION__ << endl;
#else
  cout << "starting " << __func__ << endl;
#endif
  for(int size = 100; size <= 1000; size += 100)
  {
    for(RealScalar precision = machine_epsilon<RealScalar>();
        precision < sqrt(machine_epsilon<RealScalar>());
        precision *= 2)
    {
      float result = test_lu_rank_one_size_one_precision<Scalar>(size, precision);
      cout << "size " << size << ", precision " << precision
           << ", average relative error in rank: " << result << endl;
    }
  }
}

int main()
{
  srand((unsigned int) time(NULL));
  cout << "repeating each test " << repeat << " times" << endl;
  test_lu_rank<float>();
  test_lu_rank<double>();
  test_lu_rank<complex<float> >();
  test_lu_rank<complex<double> >();
}