[eigen] Advanced vectorization
- To: eigen@xxxxxxxxxxxxxxxxxxx
- Subject: [eigen] Advanced vectorization
- From: Márton Danóczy <marton78@xxxxxxxxx>
- Date: Mon, 6 Jun 2011 11:23:04 +0200
Hi all,
I'm trying to optimize the evaluation of a squared loss function and
its gradient. That is, I'd like to calculate
L = 0.5 ||Ax-b||^2
dL/dx = A'(Ax-b)
where A has more columns than rows, i.e. A'A would be huge and caching
it would be infeasible.
Right now, I have the following routine:
Scalar objfunc(const Matrix<Scalar, Dynamic, 1>& x,
               Matrix<Scalar, Dynamic, 1>& g)
{
    e.noalias() = A * x - b;
    g.noalias() = A.adjoint() * e;
    return Scalar(0.5) * e.squaredNorm();
}
where /e/ is pre-allocated as a class member. When hand-coding this,
instead of storing /e/, I would compute it component by component and
accumulate its squared norm along the way, thus avoiding a second pass
over /e/. Is there a clever way to accomplish this with Eigen?
Thanks,
Marton