- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: Re: [eigen] complex SVD?
- From: Douglas Bates <bates@xxxxxxxxxxxxx>
- Date: Tue, 20 Dec 2011 15:03:15 -0600
On Tue, Dec 20, 2011 at 2:01 PM, Benoit Jacob <jacob.benoit.1@xxxxxxxxx> wrote:
> Eigen's JacobiSVD does two-sided Jacobi rotations, which is a slow but
> very accurate SVD algorithm. That's why it's a lot slower than LAPACK.
>
> Note that Intel recently contributed an MKL/BLAS/LAPACK backend, so why
> don't you use that? Gael could confirm whether the Intel LAPACK
> backend is fully compatible with a generic LAPACK.
>
> changeset: 4410:a1f843f013c9
> user: karturov@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
> date: Mon Dec 05 14:52:21 2011 +0700
> summary: Intel(R) MKL support added.
>
> Documentation is here:
>
> http://eigen.tuxfamily.org/dox-devel/TopicUsingIntelMKL.html
>
> Benoit
The documentation you reference indicates that the MKL additions use
?gesvd for the SVD calculation. Mark was referring to ?gesdd, which, I
believe, is faster than ?gesvd.
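
(For anyone who wants to try the backend: per that documentation page, it should
be enough to define EIGEN_USE_MKL_ALL before including any Eigen header and to
link against MKL. A minimal sketch, assuming an MKL install and the development
branch:)

    // Minimal sketch: route Eigen's decompositions (including SVD) through MKL/LAPACK.
    #define EIGEN_USE_MKL_ALL   // must come before any Eigen include
    #include <Eigen/Dense>

    int main()
    {
        Eigen::MatrixXcf A = Eigen::MatrixXcf::Random(128, 1000);
        Eigen::JacobiSVD<Eigen::MatrixXcf> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
        return svd.singularValues().size() > 0 ? 0 : 1;
    }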
> 2011/12/20 Mark Borgerding <mark@xxxxxxxxxxxxxx>:
>> Last night, when I thought Eigen couldn't handle complex SVD (based on
>> apparently outdated comments in SVD and JacobiSVD.h),
>> I held my nose and cra**ed out some C++ code to use LAPACK's CGESDD function
>> for complex<float> SVD.
>> The LAPACK routines are all in Fortran with no C/C++ declarations, and thus
>> have some ugly calling conventions:
>>
>> extern "C" {
>>   typedef void* fcomplex_ptr;  // untyped pointer to Fortran COMPLEX (complex<float>) data
>>   void cgesdd_( const char* jobz, const int* m, const int* n,
>>                 fcomplex_ptr a, const int* lda, float* s,
>>                 fcomplex_ptr u, const int* ldu,
>>                 fcomplex_ptr vt, const int* ldvt,
>>                 fcomplex_ptr work, const int* lwork,
>>                 float* rwork, int* iwork, int* info );
>> #define cgesdd cgesdd_
>> }
>> ... lots of initialization ...
>> cgesdd( "S", &m, &n, &_a[0], &lda, &_s[0], &_u[0], &ldu, &_vt[0], &ldvt,
>> &_work[0], &lwork, &_rwork[0], &_iwork[0], &info );
>> ...
>>
>>
>> The Good News:
>>
>> This morning I saw Gael's note and was overjoyed that I could ditch the ugly
>> code in favor of much more elegant Eigen code:
>>
>> _svd = JacobiSVD<typename MatType::PlainObject>( _rows, _cols,
>>                                                  ComputeThinU | ComputeThinV );
>> ...
>> memcpy( _A.data(), inblock.data(), inblock.bytes() );
>> _svd.compute(_A);
>> _singVals = _svd.singularValues();
>>
>>
>> The Bad News:
>>
>> The elegant code consistently runs much slower than LAPACK. :(
>> E.g. it takes ~5x longer to do a 128x1000 complex<float> SVD (see system details
>> below).
>>
>> Both compute the full set of singular values and the thinned subsets of the
>> left & right singular vectors (this is the "S" jobz argument for cgesdd, and
>> ComputeThinU|ComputeThinV for JacobiSVD).
>>
>> Both are initialized outside the loop and then called several times
>> (cgesdd is called with lwork = -1 to query the optimal workspace size, and the
>> JacobiSVD object is reused); a sketch of that workspace query is below.
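>>
>> (Sketch of that query, for reference; the buffer names follow the snippet
>> above and are assumed to be allocated elsewhere:)
>>
>>     std::complex<float> wkopt;
>>     int lwork = -1, info = 0;
>>     // lwork = -1 asks cgesdd to report the optimal workspace size in work[0]
>>     // instead of computing the SVD.
>>     cgesdd( "S", &m, &n, &_a[0], &lda, &_s[0], &_u[0], &ldu, &_vt[0], &ldvt,
>>             &wkopt, &lwork, &_rwork[0], &_iwork[0], &info );
>>     lwork = static_cast<int>( wkopt.real() );
>>     _work.resize( lwork );   // reused by the timed calls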
>>
>> I double-checked that Eigen is using SIMD (i.e. lots of instructions in the
>> generated assembly like 'mulps %xmm7,%xmm6').
>>
>> EIGEN_NO_DEBUG is defined.
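>>
>> (For completeness, the build line is roughly the following; file and include
>> paths are illustrative:)
>>
>>     g++ -O3 -DEIGEN_NO_DEBUG -I /path/to/eigen-3.0.3 svd_bench.cpp -llapack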
>>
>> Is there anything else I could be doing wrong that would cause such a big
>> speed gap?
>>
>> -- Mark
>>
>> P.S.
>> System details:
>> ubuntu 11.10 x86_64
>> Core i7 CPU Q 820 @ 1.73GHz
>> g++ 4.6.1
>> eigen 3.0.3
>> liblapack-dev 3.3.1-1
>>
>>
>>
>>
>> On 12/20/2011 02:45 AM, Gael Guennebaud wrote:
>>>
>>> JacobiSVD does work for complex matrices.
>>>
>>> gael
>>>
>>> On Tue, Dec 20, 2011 at 3:08 AM, Mark Borgerding <mark@xxxxxxxxxxxxxx>
>>> wrote:
>>>>
>>>> Is there an SVD decomposition for complex matrices?
>>
>>
>>
>
>