Re: [eigen] Design for integrating different decomposition approaches
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] Design for integrating different decomposition approaches
- From: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
- Date: Wed, 13 May 2009 04:13:18 +0200
OK, I was feeling bad for generating stop-energy instead of coding, so
I readied the partial-pivoting LU decomposition. It's committed
(r967249); the current name is PartialLU, but that is open for
discussion (rationale: almost as short as FastLU but a bit more
descriptive). It's restricted to square matrices (that's guarded by an
assert) and furthermore to invertible matrices (using it on a
non-invertible matrix will give nice FP overflows). This allowed us to
simplify the code a lot, so if it doesn't give a noticeable speed
optimization, then at least it'll be a code-size optimization, both
binary and source code.
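For anyone following along, the core of the algorithm can be sketched
like this. This is a plain-C++ illustration with made-up names, not the
committed PartialLU code (which operates on Eigen matrices and is
vectorized):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Illustrative LU decomposition with partial (row-only) pivoting on an
// n x n row-major matrix. On return, 'a' holds U in its upper triangle
// and the multipliers of the unit-lower L below the diagonal;
// perm[k] is the index of the row swapped into row k.
bool partial_lu(std::vector<double>& a, int n, std::vector<int>& perm) {
    perm.assign(n, 0);
    for (int k = 0; k < n; ++k) {
        // Pivot search is limited to one column -- this is the part
        // that full pivoting spends much more time on.
        int p = k;
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(a[i * n + k]) > std::fabs(a[p * n + k]))
                p = i;
        perm[k] = p;
        if (p != k)
            for (int j = 0; j < n; ++j)
                std::swap(a[k * n + j], a[p * n + j]);
        if (a[k * n + k] == 0.0)
            return false;  // singular: the method assumes invertibility
        for (int i = k + 1; i < n; ++i) {
            double m = a[i * n + k] / a[k * n + k];
            a[i * n + k] = m;  // store the multiplier in place
            for (int j = k + 1; j < n; ++j)
                a[i * n + j] -= m * a[k * n + j];
        }
    }
    return true;
}
```

The zero-pivot check also shows where the invertibility restriction
comes from: with only one column to search, a singular matrix can leave
no usable pivot at all.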
I made inverse() and determinant() use it instead of full-pivoting LU,
so it's quite well tested already, but I'll still add a unit test.
And yes, it's much faster than full-pivoting: here, for a 1000x1000
float matrix, computing the inverse() takes 2.0s now with PartialLU,
while it takes 3.0s with LU.
If instead we take only the decomposition without computing the
inverse, which is representative of what happens if we want to solve
only for 1 single vector, then the times are: 0.8s with PartialLU
versus 1.8s with LU. (The more-than-2x factor here is explained by the
fact that full pivoting has a big part that's not currently
vectorized, namely the pivot search -- so there's room for
improvement there.)
Anyway, now we have a more firm basis to discuss the LU API ;)
2009/5/12, Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
> Just wanted to add that I don't want to sound too negative, we might
> well end up doing as you propose, I just want to throw the other side
> of the argument into the balance!
> 2009/5/12, Benoit Jacob <jacob.benoit.1@xxxxxxxxx>:
>> Thanks for your ideas,
>> 2009/5/12, Hauke Heibel <hauke.heibel@xxxxxxxxxxxxxx>:
>>> You could simply add a method in which you choose the type of
>>> pivoting. This is probably not the best way to do it, since the LU
>>> interface might change or become invalid depending on the underlying
>>> algorithm. What would you do, for instance, with two permutation
>>> matrices when only partial pivoting is performed?
>> Yep, so indeed, at least at the level of implementation, LU and
>> partial-pivoting-LU are two separate classes.
>>> What if you could write
>>> LU<MatrixXf> lu_full;
>>> and have the well known Eigen interface of the LU decomposition
>>> available but on top you could write as well
>>> LU<MatrixXf, Algorithms::PartialPivoting> lu_partial;
>> This isn't more concise than:
>> PartialPivotingLU<MatrixXf> lu_partial;
>> and we haven't even considered the case where we're outside of
>> namespace Eigen and thus you'd need Eigen:: twice.
>> Actually I was considering naming the partial pivoting LU just
>> "FastLU", what's your opinion? The idea is that it's faster but
>> assumes you know what you are doing (it's only valid, basically, for
>> invertible square matrices, as I wasn't going to bother about the
>> rectangular case here which will only work either for m>=n or m<=n but
>> not both).
>> I'm also a bit reluctant about us taking the habit of exposing
>> algorithm names as part of the API. Here it makes sense because they
>> are really different decompositions with different APIs but in other
>> cases, if the APIs are identical, better keep the choice of algorithm
>> a private thing, so we're free to switch to a better algorithm in the
>> future.
>>> Both cases would use the same concise naming, and it would be clear
>>> to the user that both times she/he is dealing with an LU
>>> decomposition while being offered the correct interface.
>> I'm not sure about this one: to me, it kind of suggests that LU<> is
>> the same class in both cases, hence in particular offers the same API,
>> which isn't going to be the case (the partial-pivoting one won't have
>> rank(), for example).