Abstract:
Supersparse algorithms (see the talks by Gilbert,
Tropp, and Zou on Tuesday in this workshop) rely on
randomness, both in the construction of the
"measurement matrix" and in the algorithm that
reconstructs the sparse underlying vector from the
measurements, and they prove that the desired sparse
decomposition or approximation is obtained with high
probability. Donoho, Candès & Tao, and Rudelson &
Vershynin (see the talks on Monday in this workshop)
have shown that when l^1-minimization algorithms are
used (which are not super fast), the measurement
matrix need not be random: they prove that there
exist (increasingly frequently among random choices
as the dimension N increases) K x N matrices H, with
K < N, such that arbitrary M-sparse vectors v (i.e.
v ∈ C^N or R^N with #{n : v_n ≠ 0} ≤ M, where M < K
and typically K = O(M log N)) can be recovered
completely from y = Hv by l^1 minimization:
v = argmin_u { ‖u‖_{l^1} : Hu = y }.
However, so far no explicit constructions of such
matrices H are known: the existence arguments are
probabilistic.
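As an illustration (not part of the talk itself), the l^1 recovery just described can be sketched numerically: draw a random Gaussian K x N matrix H as a stand-in for an explicit construction, and solve the basis-pursuit linear program min ‖u‖_1 subject to Hu = y via the standard split u = u⁺ - u⁻. The dimensions N, K, M and the use of SciPy's LP solver are illustrative choices, not from the abstract.

```python
# Sketch of l^1 recovery (basis pursuit) with a random Gaussian
# measurement matrix; explicit deterministic H is exactly what the
# talk's constructions aim to supply instead.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K, M = 128, 50, 4              # ambient dim, measurements, sparsity

H = rng.standard_normal((K, N)) / np.sqrt(K)   # random K x N matrix
v = np.zeros(N)
v[rng.choice(N, M, replace=False)] = rng.standard_normal(M)  # M-sparse v
y = H @ v                                      # measurements y = Hv

# min ||u||_1 s.t. Hu = y, via u = u_plus - u_minus with both >= 0:
# minimize sum(u_plus + u_minus) subject to [H, -H] @ [u_plus; u_minus] = y
c = np.ones(2 * N)
A_eq = np.hstack([H, -H])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
u = res.x[:N] - res.x[N:]

print("max recovery error:", np.max(np.abs(u - v)))
```

With K well above M log N, as here, the random H recovers v exactly (up to solver tolerance) with overwhelming probability; the talk's point is that no deterministic certificate for a given H is known.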
In this talk two different approaches will be
sketched that construct explicit measurement
matrices H, using results and techniques from coding
theory. The recovery algorithms are not l^1-based.
