[eigen] suppressing pruning of zeros in results of sparse matrix operations
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: [eigen] suppressing pruning of zeros in results of sparse matrix operations
- From: Douglas Bates <bates@xxxxxxxxxxxxx>
- Date: Fri, 29 Jul 2011 09:05:04 -0500
I am performing a constrained optimization of a function that is
evaluated through certain matrix operations - in particular, one or
more sparse Cholesky factorizations are performed during each
evaluation. The matrix being factored, which is generated from a
sparse matrix multiplication and a transpose operation, has the same
pattern of non-zeros throughout, hence the analysis to evaluate a
fill-reducing permutation is done only once and only the numeric
factorization is done during the evaluation of the objective function.
However, I have encountered a glitch. When the function being
optimized is evaluated on the boundary of the parameter space, some of
the "non-zero" elements in one of these matrices are numerically zero.
This matrix is transposed and multiplied by another sparse matrix. I
haven't figured out exactly where the pruning occurs but the result of
A.adjoint() * B has fewer non-zeros in this case than in the
off-boundary cases. The factorization, whether done with the methods
in SimplicialCholesky.h or via Cholmod through CholmodSupport.h,
produces incorrect results and everything falls apart after that.
Is it likely that pruning is taking place in the A.adjoint() * B
operation? If so, is there a way I can rewrite the calculation to
avoid the pruning or somehow suppress the pruning?
The matrix A is the one that depends on the variables being optimized
and often it contains columns in which only the diagonal element is
potentially nonzero. When the parameter is on the boundary this
diagonal element gets the value zero. What will eventually be
decomposed is of the form
A.adjoint() * B * B.adjoint() * A + I
where I is an identity matrix.