Multi-Dimensional Subspace-Based
Parameter Estimation and Prewhitening
Stefanie Schwarz
Bachelor’s Thesis
Munich University of Technology
Institute for Circuit Theory and Signal Processing
Univ.-Prof. Dr.techn. Josef A. Nossek
Date of Start: 01/12/2011
Date of Examination: 26/03/2012
Supervisors: M.Sc. Qing Bai (Munich University of Technology),
Prof. Dr.-Ing. João Paulo C. L. da Costa (Universidade de Brasília)
Theresienstr. 90
80290 Munich
Germany
26/03/2012
Contents
1. Introduction
2. Tensor Calculus
   2.1 r-Mode Unfolding
   2.2 r-Mode Product
   2.3 Subspace-based Decomposition of Tensors
       2.3.1 Tensor Ranks
       2.3.2 The Higher-Order SVD (HOSVD)
       2.3.3 PARAFAC Decomposition
3. Data Model
   3.1 Matrix Notation
   3.2 Tensor Notation
4. R-D Parameter Estimation
   4.1 R-D Standard ESPRIT (R-D SE)
   4.2 R-D Standard Tensor-ESPRIT (R-D STE)
   4.3 Closed-Form PARAFAC based Parameter Estimation (CFP-PE)
5. R-D Prewhitening
   5.1 Sequential GSVD (S-GSVD)
       5.1.1 Prewhitening Correlation Factor Estimation (PCFE)
       5.1.2 Tensor Prewhitening Scheme: S-GSVD
   5.2 Iterative Sequential GSVD (I-S-GSVD)
6. Simulation Results
   6.1 White Noise Case
   6.2 Colored Noise Case
7. Conclusions
Appendix
   A1 The Kronecker product
Bibliography
List of Figures
1.1 MIMO multipath scenario with 2 × 2 antenna arrays on the transmitter and receiver side.
2.1 Examples and notation for a scalar, vector, matrix and order-3 tensor.
2.2 Unfoldings of a 4 × 5 × 3 tensor. Left: the 1-mode vectors, center: the 2-mode vectors, right: the 3-mode vectors, which are then used as columns of the corresponding matrix unfolding.
2.3 n-mode products of an order-3 tensor. Left: the 1-mode product, center: the 2-mode product, right: the 3-mode product.
2.4 Full SVD, economy-size SVD and low-rank approximation of a matrix A ∈ C^(5×4) with rank ρ = 3 and model order d = 2.
2.5 Core tensor of an order-3 tensor with n-ranks ρ1, ρ2, and ρ3. Only the first ρ1 × ρ2 × ρ3 elements indicated in blue are non-zero.
2.6 Illustration of the PARAFAC decomposition for a 3-way tensor. Above: representation as a sum of rank-one tensors; below: r-mode-product-based decomposition.
3.1 2-dimensional outer-product based array (OPA) of size 3 × 3.
4.1 R-D Standard ESPRIT (R-D SE), R-D Standard Tensor-ESPRIT (R-D STE) and Closed-Form PARAFAC based Parameter Estimation (CFP-PE).
5.1 Basic steps of the S-GSVD prewhitening scheme with Prewhitening Correlation Factor Estimation (PCFE).
5.2 Basic steps of the I-S-GSVD iterative prewhitening scheme.
6.1 RMSE vs. SNR for the white noise case for L = 50 runs.
6.2 RMSE vs. iterations with SNR = 15 dB, correlation coefficient ρ = 0.9 and L = 20 runs.
6.3 RMSE vs. SNR with correlation coefficient ρ = 0.9, K = 4 iterations and L = 20 runs.
6.4 RMSE vs. correlation coefficient with SNR = 20 dB, K = 4 iterations and L = 20 runs.
6.5 RMSE vs. array spacing variance with SNR = 40 dB, correlation coefficient ρ = 0.9, K = 4 iterations and L = 15 runs.
Acknowledgements
I would like to express my sincerest gratitude to Prof. Dr.-Ing. João Paulo Carvalho Lustosa da Costa, adjunct professor at Universidade de Brasília (UnB), Brazil, for having given me the opportunity to work on this interesting topic under his supervision. His bright ideas and professional guidance regarding my thesis, along with his invaluable support in everyday issues, have made this work possible and made my stay in Brasília unforgettable.
I am also very thankful for the funding from the German Academic Exchange Service (DAAD)
through the RISE weltweit programme, which has enabled my internship at UnB.
Finally, I would like to thank M.Sc. Qing Bai and Univ.-Prof. Dr.techn. Josef A. Nossek from the Institute for Circuit Theory and Signal Processing at Technical University of Munich (TUM) for accepting this thesis and for the good cooperation.
Abstract
High-resolution parameter estimation is a research field that has gained considerable attention in the past decades. A typical application is in MIMO channel measurements, where parameters such as the direction-of-arrival (DOA), direction-of-departure (DOD), path delay and Doppler spread are to be extracted from the measured signal.
Recently, subspace-based parameter estimation techniques have been improved by taking advantage of the multi-dimensional structure inherent in the measurement signal. This is accomplished by adopting subspace-based decompositions using tensor calculus, i.e., higher-dimensional matrices. State-of-the-art tensor-based decompositions include the Higher-Order Singular Value Decomposition (HOSVD) low-rank approximation and the Closed-Form Parallel Factor Analysis (CFP). The former served as the basis for the Standard Tensor-ESPRIT (STE), and the latter laid the foundation for the CFP based parameter estimation scheme (CFP-PE); both are presented in the first part of this thesis. The latter technique is appealing since it is applicable to mixed array geometries, i.e., arbitrary as well as outer-product-based arrays.
The second part of this thesis investigates the case in which the parameter estimation is subject to colored noise or interference, which can severely deteriorate the estimation accuracy. To counteract this, tensor-based prewhitening techniques are applied which exploit the Kronecker structure of the noise correlation matrices. Assuming that estimates of the noise covariance factors are available, e.g., through a noise-only measurement, the estimation accuracy can be significantly improved by using the Sequential Generalized Singular Value Decomposition (S-GSVD). In case the noise covariance information is unknown, the Iterative Sequential Generalized Singular Value Decomposition (I-S-GSVD) can successfully be applied. These tensor-based prewhitening techniques, S-GSVD and I-S-GSVD, can each be combined with the above-mentioned multi-dimensional HOSVD and CFP based parameter estimation schemes.
As a novelty in this thesis, I-S-GSVD prewhitening in conjunction with CFP based parameter estimation is proposed. In this way, the advantages of both techniques are joined, that is, the suitability of the I-S-GSVD for data contaminated with colored noise without knowledge of the noise covariance, and the applicability of CFP to mixed array geometries together with its robustness to arrays with positioning errors.
1. Introduction
High-resolution parameter estimation involves the extraction of relevant parameters from a set of R-dimensional (R-D) data measured by an antenna array. In the field of MIMO channel sounding, the considered dimensions of the measured data can correspond to time, frequency, or spatial dimensions, i.e., the measurements captured by one- or two-dimensional antenna arrays at the transmitter and the receiver. The estimated parameters include the direction-of-arrival (DOA), direction-of-departure (DOD), Doppler spread, or path delay. In this context, the desired parameters are also called spatial frequencies. A typical multipath scenario with 2 × 2 antenna arrays at the transmitter and receiver side is illustrated in Figure 1.1. Other applications of parameter estimation are manifold, ranging from radar and sonar to biomedical imaging and seismology.
Fig. 1.1. MIMO multipath scenario with 2 × 2 antenna arrays on the transmitter and receiver side.
A wide class of efficient parameter estimation schemes using subspace decomposition is based on Standard ESPRIT (SE) [1], which exploits the symmetries present in a one-dimensional antenna array. A generalized scheme which makes Standard ESPRIT applicable to multi-dimensional measurements is referred to as R-D Standard ESPRIT (R-D SE) [2], in which the R-dimensional data is unfolded into a matrix via a stacking operation. Obviously, this representation sees the problem from just one perspective, i.e., one projection, and neglects the R-D grid structure inherent in the data. Consequently, parameters cannot be estimated properly when signals are not resolvable in certain dimensions. A possibility to keep the multi-dimensional structure is to express the estimation problem using higher-dimensional matrices, so-called tensors. By considering all dimensions as a whole, it is possible to estimate parameters even if they are not resolvable in each dimension separately, and the resolution, accuracy, and robustness can be improved.
Tensor-based parameter estimation schemes have gained attention in the past few years and are presented in the first part of this thesis. Tensor-based extensions of the ESPRIT scheme have been developed recently, namely Standard Tensor-ESPRIT (STE) and Unitary Tensor-ESPRIT (UTE) [2], which utilize a tensor extension of the Singular Value Decomposition (SVD), the so-called Higher-Order SVD (HOSVD) [3]. However, a harsh constraint on ESPRIT schemes is imposed by the shift-invariance property, which stipulates that the antenna array must have a specific symmetric lattice structure. Positioning errors in real antenna arrays, for example, lead to a violation of this constraint. Schemes based on Parallel Factor Analysis (PARAFAC), a tool rooted in psychometrics [4], do not require the shift-invariance property, as they can be applied to arbitrary array geometries. There exist iterative solutions for the PARAFAC decomposition, such as Alternating Least Squares (ALS) [5], which we do not consider in this thesis in favour of the closed-form PARAFAC (CFP) [6] solution. Based on this closed-form scheme, the closed-form PARAFAC based Parameter Estimation scheme (CFP-PE) [7] was proposed, which delivers accurate estimates for arbitrary arrays and is robust against positioning errors.
The second part of this thesis is dedicated to prewhitening schemes that mitigate the effect of multi-dimensional colored noise or interference present at the receiver and/or transmitter antennas. Since colored noise affects the signal component more strongly, its presence can severely deteriorate the estimation accuracy. Prewhitening aims to distribute the noise power evenly across the noise space and thereby improve the estimation accuracy. Moreover, the presented schemes assume that the colored noise has a Kronecker structure, as can be found in certain EEG [8] and MIMO applications [9], where the noise covariance matrix is taken to be the Kronecker product of the temporal and spatial correlation matrices.
A tensor-based prewhitening scheme that exploits the inherent Kronecker structure of the noise is the Sequential Generalized Singular Value Decomposition (S-GSVD), which can be applied if the second-order statistics of the noise are known. This scheme was combined with subspace decompositions via the HOSVD [10] and the closed-form PARAFAC [11]. Both combinations offer improved accuracy over matrix-based prewhitening schemes, as well as high computational efficiency.
The iterative counterpart of the above prewhitening scheme (I-S-GSVD) [12] can be used if noise samples cannot be collected without the presence of the signal, which hinders an estimation of the noise statistics. The proposal in this thesis is to combine the I-S-GSVD with the CFP decomposition. In this way, one joins the advantages of both techniques, that is, the suitability of the I-S-GSVD for data contaminated with colored noise without knowledge of the noise statistics, and the applicability of CFP to mixed array geometries as well as its robustness to arrays with positioning errors.
The remainder of this thesis is organized as follows. A preliminary introduction to tensor
calculus and subspace decomposition of tensor-shaped data is given in Section 2. The data model
and its tensor notation are presented in Section 3. The basic concepts of the above-mentioned multi-dimensional parameter estimation schemes are explained in Section 4. Efficient tensor-based prewhitening schemes are discussed in Section 5. Section 6 assesses the performance and accuracy of the presented methods via MATLAB simulations. Finally, conclusions are drawn in Section 7.
2. Tensor Calculus
The following section aims at familiarizing the reader with fundamental tensor calculus, which forms the basis for all multi-dimensional parameter estimation and prewhitening techniques presented in this thesis. The notation is in accordance with [3]. Furthermore, the tensor extension of the Singular Value Decomposition (SVD), the so-called Higher-Order SVD, is presented.
In essence, tensors are higher-dimensional matrices. An order-R tensor (also called R-dimensional or R-way tensor) is denoted by the calligraphic variable
$$\mathcal{A} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R}, \qquad (2.1)$$
which means that $\mathcal{A}$ has $M_r$ complex elements along the dimension (or mode) $r$ for $r = 1, \ldots, R$. A single tensor element is symbolized by
$$a_{m_1, m_2, \ldots, m_R} \in \mathbb{C}, \qquad m_r = 1, \ldots, M_r, \quad r = 1, \ldots, R. \qquad (2.2)$$
In this sense, an order-0 tensor is a scalar, an order-1 tensor is equivalent to a vector, and an order-2
tensor represents a matrix. Order-3 tensors can be thought of as elements arranged in a cuboid.
Higher-dimensional tensors (R > 3) cannot be visualized graphically, yet they are the most natural way to represent the data sampled from antenna grids, as will be shown later on. An illustrative
explanation together with the notation used in this thesis is shown in Fig. 2.1.
Fig. 2.1. Examples and notation for a scalar, vector, matrix and order-3 tensor.
2.1 r-Mode Unfolding
The r-mode unfolding of a tensor $\mathcal{A}$ is denoted as
$$[\mathcal{A}]_{(r)} \in \mathbb{C}^{M_r \times (M_1 \cdot M_2 \cdots M_{r-1} \cdot M_{r+1} \cdots M_R)} \qquad (2.3)$$
and represents the matrix of r-mode vectors of the tensor A. The r-mode vectors of a tensor are
obtained by varying the r-th index within its range (1, . . . , Mr) and keeping all the other indices
fixed.
In other words, unfolding a tensor means to slice it into vectors along a certain dimension r
and rearrange them as a matrix. As an example, all possible r-mode vectors of an order-3 tensor
of size 4 × 5 × 3 are shown in Fig. 2.2. The order for rearranging the columns is chosen in accordance with [3] and indicated by the arrows in the figure.
Fig. 2.2. Unfoldings of a 4 × 5 × 3 tensor. Left: the 1-mode vectors, center: the 2-mode vectors, right: the 3-mode vectors, which are then used as columns of the corresponding matrix unfolding.
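As a minimal illustration (not part of the thesis, whose simulations were carried out in MATLAB), the following numpy sketch computes an r-mode unfolding. The function name r_mode_unfolding and the 0-indexed mode convention are my own; the column ordering agrees with [3] only up to a permutation of the columns, which leaves the column space (and hence the r-ranks) unchanged.

```python
import numpy as np

def r_mode_unfolding(A, r):
    """r-mode unfolding [A]_(r): the r-mode vectors become the columns of an
    M_r x (product of the remaining dimensions) matrix."""
    return np.moveaxis(A, r, 0).reshape(A.shape[r], -1)

# All three unfoldings of a 4 x 5 x 3 tensor, as in Fig. 2.2 (modes 0-indexed here)
A = np.arange(4 * 5 * 3).reshape(4, 5, 3)
print(r_mode_unfolding(A, 0).shape)  # (4, 15)
print(r_mode_unfolding(A, 1).shape)  # (5, 12)
print(r_mode_unfolding(A, 2).shape)  # (3, 20)
```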
2.2 r-Mode Product
The r-mode product of a tensor $\mathcal{A} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R}$ and a matrix $U \in \mathbb{C}^{J_r \times M_r}$ along the r-th mode is denoted as
$$\mathcal{B} = \mathcal{A} \times_r U \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times J_r \times \cdots \times M_R}. \qquad (2.4)$$
Note that the number of elements in the r-th dimension of $\mathcal{A}$, namely $M_r$, must match the number of columns of $U$. The r-mode product is obtained by multiplying all r-mode vectors of $\mathcal{A}$ from the left-hand side by the matrix $U$. It follows that
$$[\mathcal{A} \times_r U]_{(r)} = U \cdot [\mathcal{A}]_{(r)}. \qquad (2.5)$$
Fig. 2.3 shows possible r-mode products of the order-3 tensor A with matrices U1, U2 and U3.
Fig. 2.3. n-mode products of an order-3 tensor. Left: the 1-mode product, center: the 2-mode product, right: the 3-mode product.
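The relation (2.5) directly suggests an implementation: unfold, multiply from the left, fold back. The sketch below is my own minimal numpy version (0-indexed modes, hypothetical function names), not code from the thesis.

```python
import numpy as np

def r_mode_product(A, U, r):
    """Compute the r-mode product A x_r U via relation (2.5):
    unfold along mode r, multiply from the left by U, and fold back."""
    Ar = np.moveaxis(A, r, 0).reshape(A.shape[r], -1)        # [A]_(r)
    Br = U @ Ar                                              # U * [A]_(r)
    rest = [s for i, s in enumerate(A.shape) if i != r]
    return np.moveaxis(Br.reshape([U.shape[0]] + rest), 0, r)

# A x_1 U1 with A of size 4 x 5 x 3 and U1 of size 2 x 4 yields a 2 x 5 x 3 tensor
A = np.random.randn(4, 5, 3)
U1 = np.random.randn(2, 4)
print(r_mode_product(A, U1, 0).shape)  # (2, 5, 3)
```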
2.3 Subspace-based Decomposition of Tensors
Since the parameter estimation techniques presented in this thesis are based on the analysis of the
signal subspace, methods for decomposing the subspace of the tensor-shaped measurements are
required. A technique that is commonly applied in conventional matrix-based parameter estimation
methods (e.g., in standard ESPRIT) is the Singular Value Decomposition (SVD). Recall the SVD of a matrix $A \in \mathbb{C}^{M \times N}$, which is defined as
$$A = U \Sigma V^H, \qquad (2.6)$$
where $U \in \mathbb{C}^{M \times M}$ and $V \in \mathbb{C}^{N \times N}$ are unitary matrices and $\Sigma \in \mathbb{R}^{M \times N}$ is a pseudo-diagonal matrix containing the non-negative singular values of $A$ ordered by magnitude. If $\rho$ is the rank of the rank-deficient matrix $A$, i.e., there exist exactly $\rho$ non-zero singular values, the corresponding lossless economy-size SVD is
$$A = U_s \Sigma_s V_s^H, \qquad (2.7)$$
where $U_s \in \mathbb{C}^{M \times \rho}$ and $V_s \in \mathbb{C}^{N \times \rho}$ contain the first $\rho$ columns of $U$ and $V$, respectively, and $\Sigma_s \in \mathbb{R}^{\rho \times \rho}$ is the full-rank diagonal subspace matrix containing the singular values on its main diagonal. Considering only the $d \leq \rho$ significant singular values, a further reduction can be achieved through a so-called low-rank approximation (or truncated SVD)
$$A \approx U_s' \Sigma_s' V_s'^H, \qquad (2.8)$$
where $U_s' \in \mathbb{C}^{M \times d}$, $V_s' \in \mathbb{C}^{N \times d}$ and $\Sigma_s' \in \mathbb{R}^{d \times d}$. All three types of SVD are shown in Figure 2.4.
In a MIMO channel measurement context, $d$ is referred to as the model order, that is, the number of principal multipath components bearing a strong signal. The low-rank approximation thus isolates the signal subspace of the measured signal, while treating non-significant multipath components as noise.
Fig. 2.4. Full SVD, economy-size SVD and low-rank approximation of matrix A ∈ C5×4 with rank ρ = 3
and model order d = 2.
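For concreteness, a minimal numpy sketch of the low-rank approximation (2.8) is given below (my own illustration; function name is hypothetical). It returns both the rank-d approximation and the signal subspace basis.

```python
import numpy as np

def low_rank_approximation(A, d):
    """Truncated SVD keeping only the d dominant singular values, cf. (2.8).
    Returns the rank-d approximation and the signal subspace basis U'_s."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)   # economy-size SVD
    Us, s_d, Vsh = U[:, :d], s[:d], Vh[:d, :]
    return (Us * s_d) @ Vsh, Us

# Rank-3 matrix of size 5 x 4 approximated with model order d = 2, as in Fig. 2.4
A = np.random.randn(5, 3) @ np.random.randn(3, 4)
A_hat, Us = low_rank_approximation(A, 2)
print(np.linalg.matrix_rank(A_hat), Us.shape)   # 2 (5, 2)
```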
2.3.1 Tensor Ranks
For matrices, the column (row) rank is defined as the dimension of the vector space spanned by the
columns (rows). As a fundamental theorem, the column rank and row rank of a matrix are always
equal. For higher-order tensors, there exist two different rank definitions:
• The r-ranks of an R-dimensional tensor are defined as the dimension of the vector space
spanned by the r-mode vectors of the tensor. Consequently, the r-rank is equal to the rank of
the r-mode unfolding. Unlike for matrices, the r-ranks of a tensor are not required to be equal.
• The tensor rank. A tensor $\mathcal{A} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R}$ has rank one if it can be represented via the outer product of R non-zero vectors $f^{(r)} \in \mathbb{C}^{M_r}$ as
$$\mathcal{A} = f^{(1)} \circ f^{(2)} \circ \ldots \circ f^{(R)}. \qquad (2.9)$$
Consequently, a tensor $\mathcal{A}$ has rank r if it can be stated as the linear combination of r rank-one tensors and if this cannot be accomplished with fewer than r terms:
$$\mathcal{A} = \sum_{n=1}^{r} f_n^{(1)} \circ f_n^{(2)} \circ \ldots \circ f_n^{(R)} \qquad (2.10)$$
Note that
$$\text{r-rank}(\mathcal{A}) \leq \text{rank}(\mathcal{A}) \quad \forall r = 1, \ldots, R, \qquad (2.11)$$
which means that the tensor rank of a higher-order tensor can be larger than all of its r-ranks.
2.3.2 The Higher-Order SVD (HOSVD)
Analogously to the SVD of a matrix, we define the Higher-Order SVD (HOSVD) [13] of a tensor $\mathcal{A} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R}$ via the SVDs of all r-mode unfoldings of the tensor. It is given by
$$\mathcal{A} = \mathcal{S} \times_1 U_1 \times_2 U_2 \cdots \times_R U_R, \qquad (2.12)$$
where $U_r \in \mathbb{C}^{M_r \times M_r}$, $r = 1, 2, \ldots, R$ are the unitary matrices containing the singular vectors of the r-th mode unfolding. $\mathcal{S} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R}$ is the core tensor, which is not diagonal but satisfies the so-called all-orthogonality conditions [3]. Figure 2.5 depicts a core tensor for an order-3 tensor. Only the first $\rho_1 \times \rho_2 \times \rho_3$ elements of the core tensor are non-zero. The size of the blue cuboid thus indicates the r-ranks $\rho_r$ of the tensor $\mathcal{A}$, as defined in Section 2.3.1.
Fig. 2.5. Core tensor of an order-3 tensor with n-ranks ρ1, ρ2, and ρ3. Only the first ρ1 × ρ2 × ρ3 elements
indicated in blue are non-zero.
Therefore, an economy-size HOSVD of $\mathcal{A}$ can be stated as
$$\mathcal{A} = \mathcal{S}^{[s]} \times_1 U_1^{[s]} \times_2 U_2^{[s]} \cdots \times_R U_R^{[s]}, \qquad (2.13)$$
where $\mathcal{S}^{[s]} \in \mathbb{C}^{\rho_1 \times \rho_2 \times \cdots \times \rho_R}$ as shown in Figure 2.5, and $U_r^{[s]} \in \mathbb{C}^{M_r \times \rho_r}$, $r = 1, 2, \ldots, R$ contain the first $\rho_r$ columns of $U_r$. An example of a core tensor $\mathcal{S}$ with its non-zero part $\mathcal{S}^{[s]}$ is depicted in Figure 2.5. Note that $\rho_r \leq M_r$ for all $r = 1, 2, \ldots, R$.
Finally, for a model order d, the corresponding HOSVD low-rank approximation is
$$\mathcal{A} \approx \mathcal{S}'^{[s]} \times_1 U_1'^{[s]} \times_2 U_2'^{[s]} \cdots \times_R U_R'^{[s]}, \qquad (2.14)$$
where $\mathcal{S}'^{[s]} \in \mathbb{C}^{d \times d \times \cdots \times d}$, and $U_r'^{[s]} \in \mathbb{C}^{M_r \times d}$, $r = 1, 2, \ldots, R$ are the matrices of r-mode singular vectors.
In practice, the HOSVD is obtained via the SVDs of the matrix unfoldings.
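That last remark translates into a short sketch: one truncated SVD per unfolding, followed by projecting onto the obtained bases to get the core tensor. The code below is my own illustration under these assumptions (helper names are hypothetical), not the thesis' implementation.

```python
import numpy as np

def r_mode_unfolding(A, r):
    return np.moveaxis(A, r, 0).reshape(A.shape[r], -1)

def r_mode_product(A, U, r):
    rest = [s for i, s in enumerate(A.shape) if i != r]
    return np.moveaxis((U @ r_mode_unfolding(A, r)).reshape([U.shape[0]] + rest), 0, r)

def truncated_hosvd(A, d):
    """HOSVD low-rank approximation, cf. (2.14): one truncated SVD per unfolding,
    then the core tensor is obtained by projecting A onto the subspace bases."""
    R = A.ndim
    U = [np.linalg.svd(r_mode_unfolding(A, r), full_matrices=False)[0][:, :d] for r in range(R)]
    S = A
    for r in range(R):
        S = r_mode_product(S, U[r].conj().T, r)   # core tensor of size d x ... x d
    return S, U

A = np.random.randn(4, 5, 3) + 1j * np.random.randn(4, 5, 3)
S, U = truncated_hosvd(A, 2)
print(S.shape)  # (2, 2, 2)
```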
2.3.3 PARAFAC decomposition
The Parallel Factor Analysis (PARAFAC), a tool that originally stems from the field of psychometrics [4], takes a different approach to decomposing a tensor. While the HOSVD is focussed on the r-spaces, PARAFAC considers the fact that the SVD can be seen as a decomposition of a matrix into the sum of a minimal number of rank-one matrices, which are given by the corresponding left and right singular vectors and weighted by the corresponding singular values. In the same manner, we can decompose the R-dimensional data tensor into a sum of a minimal number of rank-one tensors, as they were defined in (2.9). Therefore, the aim of PARAFAC is to decompose a tensor $\mathcal{A}$ of rank d into a sum of d rank-one tensors:
$$\mathcal{A} = \sum_{n=1}^{d} f_n^{(1)} \circ f_n^{(2)} \circ \ldots \circ f_n^{(R)}, \qquad (2.15)$$
where $f_n^{(r)} \in \mathbb{C}^{M_r}$, $n = 1, \ldots, d$. This means that the model order coincides with the tensor rank as defined in (2.10). By defining the so-called factor matrices $F^{(r)} \in \mathbb{C}^{M_r \times d}$, which contain the vectors $f_i^{(r)}$ as columns,
$$F^{(r)} = \left[ f_1^{(r)}, \ldots, f_d^{(r)} \right] \in \mathbb{C}^{M_r \times d}, \qquad (2.16)$$
the PARAFAC decomposition of a tensor $\mathcal{A} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R}$ with model order d can be rewritten as
$$\mathcal{A} = \mathcal{I}_{R,d} \times_1 F^{(1)} \times_2 F^{(2)} \cdots \times_R F^{(R)}, \qquad (2.17)$$
where $\mathcal{I}_{R,d}$ is the R-dimensional identity tensor of size $d \times d \times \ldots \times d$. Its elements are equal to one for indices $i_1 = i_2 = \ldots = i_R$ and zero otherwise. Comparing (2.17) with the HOSVD low-rank approximation (2.14), the core tensor is replaced by the "diagonal" identity tensor in the PARAFAC decomposition. The dimensions are thus completely decoupled.
Figure 2.6 illustrates the PARAFAC decomposition for an order-3 tensor; first as a sum of
rank-one tensors according to (2.15), then as r-mode products based decomposition (2.17).
Fig. 2.6. Illustration of PARAFAC decomposition for a 3-way tensor. Above: representation as a sum of
rank-one tensors; below: r-mode products based decomposition.
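As a small illustration (not from the thesis), the sketch below assembles a tensor from given factor matrices as the sum of rank-one terms (2.15), which is equivalent to the r-mode product form (2.17); the function name is my own.

```python
import numpy as np

def parafac_tensor(factors):
    """Build the tensor of (2.17) from factor matrices F^(r) (each M_r x d),
    computed as the equivalent sum of d rank-one outer products, cf. (2.15)."""
    d = factors[0].shape[1]
    A = np.zeros(tuple(F.shape[0] for F in factors), dtype=complex)
    for n in range(d):
        rank_one = factors[0][:, n]
        for F in factors[1:]:
            rank_one = np.multiply.outer(rank_one, F[:, n])   # f_n^(1) o f_n^(2) o ...
        A += rank_one
    return A

F1, F2, F3 = (np.random.randn(M, 3) + 1j * np.random.randn(M, 3) for M in (4, 5, 3))
print(parafac_tensor([F1, F2, F3]).shape)   # (4, 5, 3)
```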
There exist iterative solutions for accomplishing the PARAFAC decomposition, such as Multilinear Alternating Least Squares (MALS) [5]. However, the MALS algorithm is not suitable when the factor matrices are rank deficient [6]. Moreover, it has a high computational complexity and convergence is not guaranteed, since it is an iterative solution. The solution used in this thesis is the closed-form PARAFAC (CFP) [6], which uses several simultaneous matrix diagonalizations based on the HOSVD. The problem here is the computationally expensive task of finding the correct factor matrix estimates out of a large set of estimates. However, the computational complexity of the CFP can be drastically reduced by computing only one solution.
3. Data Model
The tensor notation introduced in Section 2 is a convenient way to represent multi-dimensional
signals sampled from antenna grids. For our data model, we assume that d superimposed planar wavefronts are captured by an R-dimensional (R-D) grid with $M_r$ sensors in each dimension $r \in \{1, \ldots, R\}$. These dimensions can, e.g., be the horizontal and vertical axes of the transmitter and receiver arrays, or frequency bins. Each dimension r represents a spatial frequency $\mu_i^{(r)}$ to be estimated for each path $i$, $i = 1, \ldots, d$. The spatial frequencies correspond to physical parameters such as the elevation or azimuth angle of the direction-of-departure or direction-of-arrival, the time delay of arrival, or the Doppler shift.
At a sampling time instant n and sensor $(m_1, \ldots, m_R)$, we obtain the single measurement
$$x_{m_1,\ldots,m_R,n} = \sum_{i=1}^{d} s_{i,n} \cdot \prod_{r=1}^{R} e^{j (m_r - 1) \mu_i^{(r)}} + n_{m_1,\ldots,m_R,n}, \qquad (3.1)$$
where $s_{i,n}$ are the complex symbols from the i-th source at snapshot n. The noise elements $n_{m_1,\ldots,m_R,n}$ are i.i.d. ZMCSCG (zero-mean circularly-symmetric complex Gaussian) with variance $\sigma_n^2$. Note that in Section 4, this noise is assumed to be white, whereas the colored noise case is considered in Section 5.
The data are collected in N consecutive time instants, called snapshots. The model order d, that
is, the number of principal multipath components, is assumed to be known. It can be estimated by
using multi-dimensional model order selection schemes [7]. Furthermore, we assume that d ≤ N
and d ≤ Mmax (overdetermined system).
The signal is taken to be narrowband, such that the antenna element spacings do not exceed half a wavelength. Figure 3.1 shows an example of a measurement grid in the form of a 2-dimensional outer-product array (OPA), where all distances $\Delta_i^{(r)}$ for $i = 1, 2, 3$ and $r = 1, 2$ can take different values.
Fig. 3.1. 2-dimensional outer-product based array (OPA) of size 3 × 3.
3.1 Matrix Notation
For the matrix notation, the measurements have to be aligned into a matrix, which is accomplished by appropriate stacking. If we capture measurements over N subsequent time instants and stack each snapshot into a column of a matrix, one obtains the measurement matrix $X \in \mathbb{C}^{M \times N}$
$$X = \begin{bmatrix}
x_{1,1,\ldots,1,1,1} & x_{1,1,\ldots,1,1,2} & \ldots & x_{1,1,\ldots,1,1,N} \\
x_{1,1,\ldots,1,2,1} & x_{1,1,\ldots,1,2,2} & \ldots & x_{1,1,\ldots,1,2,N} \\
\vdots & \vdots & & \vdots \\
x_{1,1,\ldots,1,M_R,1} & x_{1,1,\ldots,1,M_R,2} & \ldots & x_{1,1,\ldots,1,M_R,N} \\
x_{1,1,\ldots,2,1,1} & x_{1,1,\ldots,2,1,2} & \ldots & x_{1,1,\ldots,2,1,N} \\
x_{1,1,\ldots,2,2,1} & x_{1,1,\ldots,2,2,2} & \ldots & x_{1,1,\ldots,2,2,N} \\
\vdots & \vdots & & \vdots \\
x_{M_1,M_2,\ldots,M_{R-1},M_R,1} & x_{M_1,M_2,\ldots,M_{R-1},M_R,2} & \ldots & x_{M_1,M_2,\ldots,M_{R-1},M_R,N}
\end{bmatrix} \qquad (3.2)$$
where $M = \prod_{r=1}^{R} M_r$. The additive noise samples can be summarized in a noise matrix $N \in \mathbb{C}^{M \times N}$, which is stacked in the same fashion as $X$. Using matrix-vector notation for the data model (3.1), one obtains
$$X = A \cdot S + N, \qquad (3.3)$$
where
$$S = \begin{bmatrix}
s_{1,1} & s_{1,2} & \ldots & s_{1,N} \\
s_{2,1} & s_{2,2} & \ldots & s_{2,N} \\
\vdots & \vdots & & \vdots \\
s_{d,1} & s_{d,2} & \ldots & s_{d,N}
\end{bmatrix} \in \mathbb{C}^{d \times N} \qquad (3.4)$$
is the symbol matrix, and $A \in \mathbb{C}^{M \times d}$ is the so-called joint array steering matrix whose columns contain the array steering vectors $a(\mu_i)$, $i = 1, \ldots, d$ as given in
$$A = [a(\mu_1), a(\mu_2), \ldots, a(\mu_d)] \qquad (3.5)$$
with $\mu_i = \left[ \mu_i^{(1)}, \mu_i^{(2)}, \ldots, \mu_i^{(R)} \right]^T$. That is, the i-th column of $A$ only contains the R spatial frequencies $\mu_i^{(r)}$, $r = 1, \ldots, R$ belonging to path i.
The array steering vectors can explicitly be calculated as the Kronecker products (matrix outer product, see A1) of the array steering vectors of the separate modes through
$$a(\mu_i) = a^{(1)}(\mu_i^{(1)}) \otimes a^{(2)}(\mu_i^{(2)}) \otimes \ldots \otimes a^{(R)}(\mu_i^{(R)}). \qquad (3.6)$$
The vectors $a^{(r)}(\mu_i^{(r)}) \in \mathbb{C}^{M_r \times 1}$ denote the response of the array in the r-th mode due to the i-th wavefront. As an example, for a Uniform Rectangular Array (URA), which is an OPA (Fig. 3.1) with constant distances over a mode and $M_r$ sensors, we have that
$$a^{(r)}(\mu_i^{(r)}) = \begin{bmatrix} 1 \\ e^{j \mu_i^{(r)}} \\ e^{j 2 \mu_i^{(r)}} \\ \vdots \\ e^{j (M_r - 1) \mu_i^{(r)}} \end{bmatrix}. \qquad (3.7)$$
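To make (3.6)-(3.7) concrete, here is a small numpy sketch (my own illustration, not the thesis' MATLAB code): it builds the per-mode URA steering vectors and stacks their Kronecker products into the joint steering matrix A.

```python
import numpy as np

def ura_steering_vector(mu, M):
    """Per-mode URA steering vector, cf. (3.7)."""
    return np.exp(1j * mu * np.arange(M))

def joint_steering_matrix(mu, M):
    """Joint array steering matrix A of (3.5)-(3.6).
    mu has shape (R, d): spatial frequencies mu_i^(r); M lists M_1, ..., M_R."""
    R, d = mu.shape
    columns = []
    for i in range(d):
        a = ura_steering_vector(mu[0, i], M[0])
        for r in range(1, R):
            a = np.kron(a, ura_steering_vector(mu[r, i], M[r]))   # Kronecker product (3.6)
        columns.append(a)
    return np.stack(columns, axis=1)    # size (prod(M), d)

mu = np.random.uniform(-np.pi, np.pi, size=(2, 3))   # R = 2 modes, d = 3 paths
A = joint_steering_matrix(mu, [4, 4])
print(A.shape)  # (16, 3)
```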
3.2 Tensor Notation
A more natural way to capture the samples (3.1) over N subsequent time instants is by arranging them as an (R+1)-dimensional measurement tensor $\mathcal{X} \in \mathbb{C}^{M_1 \times \ldots \times M_R \times N}$. Similarly to (3.3), the tensor notation reads as
$$\mathcal{X} = \mathcal{A} \times_{R+1} S^T + \mathcal{N}, \qquad (3.8)$$
where $\mathcal{A} \in \mathbb{C}^{M_1 \times \ldots \times M_R \times d}$ is the array steering tensor of an outer product array (OPA) as in Figure 3.1, given by
$$\mathcal{A} = \sum_{i=1}^{d} a^{(1)}(\mu_i^{(1)}) \circ a^{(2)}(\mu_i^{(2)}) \circ \ldots \circ a^{(R)}(\mu_i^{(R)}). \qquad (3.9)$$
$S \in \mathbb{C}^{d \times N}$ is the same transmitted symbol matrix as in (3.4), and $\mathcal{N} \in \mathbb{C}^{M_1 \times \ldots \times M_R \times N}$ is the noise tensor. Similarly to the procedure in Section 2.3.3, where (2.15) has the same structure as (3.9), the array steering tensor can also be stated as
$$\mathcal{A} = \mathcal{I}_{R+1,d} \times_1 A^{(1)} \times_2 A^{(2)} \cdots \times_R A^{(R)}, \qquad (3.10)$$
where $A^{(r)} \in \mathbb{C}^{M_r \times d}$ comprises
$$A^{(r)} = \left[ a^{(r)}(\mu_1^{(r)}), a^{(r)}(\mu_2^{(r)}), \ldots, a^{(r)}(\mu_d^{(r)}) \right]. \qquad (3.11)$$
The following relations between the matrix notation from Section 3.1 and the presented tensor notation hold:
$$A = [\mathcal{A}]_{(R+1)}^T, \qquad (3.12)$$
$$N = [\mathcal{N}]_{(R+1)}^T, \qquad (3.13)$$
$$X = [\mathcal{X}]_{(R+1)}^T, \qquad (3.14)$$
i.e., the measurement matrix $X$ is equal to the transpose of the unfolding of the measurement tensor $\mathcal{X}$ along the last mode. The above steps are also referred to as stacking operations.
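Under the same assumptions, the hedged sketch below (helper and function names are mine) generates a noisy measurement tensor according to (3.8) as a sum of outer products plus white noise, and forms a measurement matrix via the last-mode unfolding of (3.14); its row ordering may differ from the stacking in (3.2) only by a permutation.

```python
import numpy as np

def r_mode_unfolding(A, r):
    return np.moveaxis(A, r, 0).reshape(A.shape[r], -1)

def measurement_tensor(A_factors, S, sigma_n):
    """Noisy measurement tensor X of (3.8), built from the per-mode steering
    matrices A^(r) (list of M_r x d arrays) and the symbol matrix S (d x N)."""
    M = [F.shape[0] for F in A_factors]
    d, N = S.shape
    X = np.zeros(M + [N], dtype=complex)
    for i in range(d):
        a = A_factors[0][:, i]
        for F in A_factors[1:]:
            a = np.multiply.outer(a, F[:, i])           # rank-one term of path i, cf. (3.9)
        X += np.multiply.outer(a, S[i, :])              # contribution of x_{R+1} S^T
    noise = (np.random.randn(*X.shape) + 1j * np.random.randn(*X.shape)) * np.sqrt(sigma_n / 2)
    return X + noise

# 2-D example: 4 x 4 array, d = 2 paths, N = 5 snapshots
mu = np.random.uniform(-np.pi, np.pi, (2, 2))
A_factors = [np.exp(1j * np.outer(np.arange(4), mu[r])) for r in range(2)]
S = (np.random.randn(2, 5) + 1j * np.random.randn(2, 5)) / np.sqrt(2)
X = measurement_tensor(A_factors, S, sigma_n=0.01)
X_matrix = r_mode_unfolding(X, X.ndim - 1).T   # stacking relation (3.14), up to row ordering
print(X.shape, X_matrix.shape)                 # (4, 4, 5) (16, 5)
```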
4. R-D Parameter Estimation
In this section, multi-dimensional parameter estimation schemes based on subspace decomposition
are presented, where the signal and noise subspaces of the measurement tensor X as in (3.8) are
separated. The number of principal path components d can be estimated according to Model Order
Selection (MOS) schemes such as [7]. The three presented R-dimensional parameter estimation
techniques are R-D Standard ESPRIT (R-D SE), R-D Standard Tensor ESPRIT (R-D STE) – both
of which can only be applied if the shift invariance property [1] holds – and finally closed-form
PARAFAC Parameter Estimation (CFP-PE). Figure 4.1 gives an overview of all three discussed schemes and shall help the reader follow the steps presented in the following subsections.
[Flowchart: (SE) measurement tensor → stacking operation → SVD low-rank decomposition → signal subspace matrix → shift invariance (SI) equations; (STE) measurement tensor → HOSVD low-rank decomposition → signal subspace tensor → SI equations; (CFP-PE) measurement tensor → PARAFAC decomposition via CFP → factor matrices → SI equations / peak search (PS).]
Fig. 4.1. R-D Standard ESPRIT (R-D SE), R-D Standard Tensor-ESPRIT (R-D STE) and Closed-Form
PARAFAC based Parameter Estimation (CFP-PE).
4.1 R-D Standard ESPRIT (R-D SE)
Via the stacking operation (3.14), the measurement tensor $\mathcal{X}$ is reshaped into a matrix $X \in \mathbb{C}^{M \times N}$, where $M = \prod_{r=1}^{R} M_r$. The signal subspace is computed via a low-rank Singular Value Decomposition (SVD) according to (2.8) as
$$X \approx U_s \Sigma_s V_s^H, \qquad (4.1)$$
where $\Sigma_s \in \mathbb{R}^{d \times d}$. Note that the prime symbol is dropped for notational convenience. By exploiting the shift invariance of the antenna array, a computationally inexpensive closed-form expression for the parameter estimation can be deduced [2].
4.2 R-D Standard Tensor-ESPRIT (R-D STE)
This method employs the actual measurement tensor $\mathcal{X}$ and separates the signal and noise subspaces via the HOSVD low-rank approximation according to (2.14) as
$$\mathcal{X} \approx \mathcal{S}^{[s]} \times_1 U_1^{[s]} \cdots \times_R U_R^{[s]} \times_{R+1} U_{R+1}^{[s]}, \qquad (4.2)$$
where $\mathcal{S}^{[s]} \in \mathbb{C}^{r_1 \times \ldots \times r_{R+1}}$ is the core tensor, $U_r^{[s]} \in \mathbb{C}^{M_r \times r_r}$ is the subspace matrix of the r-th dimension, and $r_r = \min(M_r, d)$ is the r-rank of $\mathcal{X}$.
The signal subspace tensor $\mathcal{U}^{[s]} \in \mathbb{C}^{M_1 \times M_2 \times \ldots \times M_R \times d}$ is therefore
$$\mathcal{U}^{[s]} = \mathcal{S}^{[s]} \times_1 U_1^{[s]} \cdots \times_R U_R^{[s]}. \qquad (4.3)$$
Again exploiting the shift-invariance structure, we can build R shift invariance matrices according to [2].
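A compact numpy sketch of the subspace step (4.2)-(4.3) is given below (illustrative only; function names are mine, and the subsequent shift-invariance equations of [2] are not shown).

```python
import numpy as np

def r_mode_unfolding(A, r):
    return np.moveaxis(A, r, 0).reshape(A.shape[r], -1)

def r_mode_product(A, U, r):
    rest = [s for i, s in enumerate(A.shape) if i != r]
    return np.moveaxis((U @ r_mode_unfolding(A, r)).reshape([U.shape[0]] + rest), 0, r)

def ste_signal_subspace(X, d):
    """HOSVD-based signal subspace tensor of (4.2)-(4.3)."""
    n_modes = X.ndim                                 # R + 1 modes
    U = [np.linalg.svd(r_mode_unfolding(X, r), full_matrices=False)[0][:, :min(X.shape[r], d)]
         for r in range(n_modes)]
    S = X
    for r in range(n_modes):
        S = r_mode_product(S, U[r].conj().T, r)      # truncated core tensor
    Us = S
    for r in range(n_modes - 1):                     # expand only the first R modes, cf. (4.3)
        Us = r_mode_product(Us, U[r], r)
    return Us

X = np.random.randn(4, 4, 5) + 1j * np.random.randn(4, 4, 5)
print(ste_signal_subspace(X, d=2).shape)  # (4, 4, 2)
```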
4.3 Closed-Form PARAFAC based Parameter Estimation (CFP-PE)
The Closed-Form PARAFAC based Parameter Estimation (CFP-PE) scheme has been proposed in [7]. The measurement tensor $\mathcal{X}$ is decomposed via PARAFAC according to (2.17):
$$\mathcal{X} = \mathcal{I}_{R+1,d} \times_1 F^{(1)} \times_2 F^{(2)} \cdots \times_R F^{(R)} \times_{R+1} F^{(R+1)}, \qquad (4.4)$$
where $\mathcal{I}_{R+1,d}$ is the (R+1)-dimensional identity tensor and each dimension has size d. The factor matrices $F^{(r)} \in \mathbb{C}^{M_r \times d}$ are found via the closed-form PARAFAC solution presented in [6].
Comparing with the tensor data model (3.8) and (3.10), one can see that the factor matrices $F^{(r)}$ provide estimates for the system's steering matrices $A^{(r)}$ and the symbol matrix $S$:
$$\mathcal{X} \approx \mathcal{I}_{R+1,d} \times_1 A^{(1)} \cdots \times_R A^{(R)} \times_{R+1} S^T \qquad (4.5)$$
Thus, through the PARAFAC decomposition, we are able to find estimates with the correct structure of the steering matrices $A^{(r)}$, regardless of whether the sensor grid fulfils the shift invariance or not. This guarantees the flexibility of this scheme regarding the chosen sensor array structure and leads to an increased robustness.
Furthermore, the CFP decouples the multi-dimensional data into vectors corresponding to a certain dimension and source. Therefore, after the CFP, a multi-dimensional problem is transformed into several one-dimensional problems. These one-dimensional problems can be solved via Peak Search (PS) or via Shift Invariance (SI) if applicable for the given sensor grid. Moreover, the CFP-PE allows the introduction of a step called merging dimensions, which is applied to increase the model order that can be handled. A subsequent Least Squares Khatri-Rao Factorization (LSKRF) is used to refactorize the merged factor matrices.
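As an illustration of one such one-dimensional problem, the hedged sketch below estimates a single spatial frequency from one factor-matrix column by a simple peak search over a grid of candidate steering vectors; the grid size and the function name are my own choices, not the thesis' implementation.

```python
import numpy as np

def estimate_mu_peak_search(f_col, grid=None):
    """Estimate a spatial frequency from one factor-matrix column by peak search:
    correlate the column with candidate steering vectors a^(r)(mu), cf. (3.7)."""
    M = f_col.size
    if grid is None:
        grid = np.linspace(-np.pi, np.pi, 4096)
    manifold = np.exp(1j * np.outer(np.arange(M), grid))    # candidate steering vectors
    score = np.abs(manifold.conj().T @ f_col)
    return grid[np.argmax(score)]

mu_true = 0.7
f_col = np.exp(1j * mu_true * np.arange(4)) * np.exp(1j * 0.3)  # arbitrary complex scaling
print(estimate_mu_peak_search(f_col))  # close to 0.7
```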
5. R-D Prewhitening
In this section, state-of-the-art tensor-based prewhitening schemes are presented, namely the Sequential Generalized SVD (S-GSVD) and its iterative counterpart, the I-S-GSVD. The former can be applied if a noise-only measurement for the estimation of the noise statistics is available, while the latter scheme can deliver improved estimates even without any information about the noise.
From now on, we thus assume that the additive noise component from (3.8) is colored,
$$\mathcal{X} = \mathcal{A} \times_{R+1} S^T + \mathcal{N}^{(c)}, \qquad (5.1)$$
and that the colored noise tensor $\mathcal{N}^{(c)} \in \mathbb{C}^{M_1 \times \ldots \times M_R \times N}$ has a Kronecker structure, as can be found in certain EEG [8] and MIMO applications [9]. The colored noise tensor can thus be stated as
$$\left[ \mathcal{N}^{(c)} \right]_{(R+1)} = [\mathcal{N}]_{(R+1)} \cdot (L_1 \otimes L_2 \otimes \ldots \otimes L_R)^T, \qquad (5.2)$$
where $\otimes$ is the Kronecker product (see A1), $\mathcal{N}$ is a white noise tensor collecting i.i.d. ZMCSCG noise samples with variance $\sigma_n^2$, and $L_r \in \mathbb{C}^{M_r \times M_r}$, $r = 1, \ldots, R$ are the so-called noise correlation factors of the r-th dimension of the colored noise tensor.
As proven in [10], (5.2) can be rewritten as
$$\mathcal{N}^{(c)} = \mathcal{N} \times_1 L_1 \times_2 L_2 \cdots \times_R L_R, \qquad (5.3)$$
with $\mathcal{N} \in \mathbb{C}^{M_1 \times \ldots \times M_R \times N}$ denoting a white (i.e., uncorrelated) ZMCSCG noise tensor. Please note that while the noise tensor $\mathcal{N}$ is (R+1)-dimensional, there are only correlation matrices for the first R dimensions, as we assume that the time samples are uncorrelated. Alternatively, one can say that $L_{R+1}$ is given to be an identity matrix, which has no effect on the noise tensor.
The noise covariance matrix of the r-th mode, $R_r$, is defined as
$$E\left\{ \left[ \mathcal{N}^{(c)} \right]_{(r)} \cdot \left[ \mathcal{N}^{(c)} \right]_{(r)}^H \right\} = \alpha \cdot R_r = \alpha \cdot L_r \cdot L_r^H, \qquad (5.4)$$
where $\alpha$ is a normalization constant such that $\mathrm{tr}(L_r \cdot L_r^H) = M_r$. The equivalence between (5.2), (5.3) and (5.4) is shown in [10].
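For illustration, the following sketch draws a colored noise tensor according to (5.3); the r_mode_product helper is re-declared from the earlier sketch, and the example correlation factors (which are not normalized here) are my own choice.

```python
import numpy as np

def r_mode_product(A, U, r):
    rest = [s for i, s in enumerate(A.shape) if i != r]
    Ar = np.moveaxis(A, r, 0).reshape(A.shape[r], -1)
    return np.moveaxis((U @ Ar).reshape([U.shape[0]] + rest), 0, r)

def colored_noise_tensor(M, N, L_factors, sigma_n=1.0):
    """Kronecker-structured colored noise, cf. (5.3): white ZMCSCG noise
    multiplied in each of the first R modes by the correlation factor L_r."""
    shape = list(M) + [N]
    Nw = (np.random.randn(*shape) + 1j * np.random.randn(*shape)) * np.sqrt(sigma_n / 2)
    Nc = Nw
    for r, L in enumerate(L_factors):          # no factor for the time mode R+1
        Nc = r_mode_product(Nc, L, r)
    return Nc

M, N = [4, 4], 5
L_factors = [np.linalg.cholesky(np.eye(m) * 0.5 + 0.5) for m in M]   # example factors
print(colored_noise_tensor(M, N, L_factors).shape)   # (4, 4, 5)
```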
5.1 Sequential GSVD (S-GSVD)
The Sequential GSVD prewhitening scheme was proposed in [10]. As presented in the following, it consists of two steps: first, the estimation of the correlation factors $L_r$ from a noise-only measurement tensor $\mathcal{N}^{(c)}$; then, the actual prewhitening scheme can be applied.
5.1.1 Prewhitening Correlation Factor Estimation (PCFE)
In order to apply the S-GSVD prewhitening scheme, the correlation factors $L_r$ must be estimated first from the noise-only measurement tensor $\mathcal{N}^{(c)}$. Dropping the expectation operator from (5.4), one can estimate the noise covariance matrix $R_r$ for each dimension $r = 1, \ldots, R$ by
$$\hat{R}_r = \alpha' \cdot \left[ \mathcal{N}^{(c)} \right]_{(r)} \cdot \left[ \mathcal{N}^{(c)} \right]_{(r)}^H = \hat{L}_r \cdot \hat{L}_r^H, \qquad (5.5)$$
where again $\alpha'$ is chosen such that $\mathrm{tr}(\hat{R}_r) = M_r$. These estimates then need to be factorized to obtain the correlation factor estimates $\hat{L}_r$, e.g., directly via a Cholesky decomposition or via the eigenvalue decomposition (EVD)
$$\hat{R}_r = Q_r \cdot \Lambda \cdot Q_r^H, \qquad (5.6)$$
from which follows that
$$\hat{L}_r = Q_r \cdot \Lambda^{\frac{1}{2}}. \qquad (5.7)$$
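A minimal numpy version of the PCFE steps (5.5)-(5.7), using the EVD-based factorization, is sketched below (function names are mine; this is an illustration, not the thesis' code).

```python
import numpy as np

def r_mode_unfolding(A, r):
    return np.moveaxis(A, r, 0).reshape(A.shape[r], -1)

def pcfe(Nc, R):
    """Prewhitening Correlation Factor Estimation, cf. (5.5)-(5.7):
    per-mode sample covariance, trace normalization, and EVD-based square root."""
    L_hat = []
    for r in range(R):
        Nr = r_mode_unfolding(Nc, r)
        Rr = Nr @ Nr.conj().T
        Rr *= Nc.shape[r] / np.trace(Rr).real        # alpha' such that tr(R_r) = M_r
        w, Q = np.linalg.eigh(Rr)                    # EVD of the Hermitian estimate
        L_hat.append(Q @ np.diag(np.sqrt(np.clip(w, 0, None))))
    return L_hat

Nc = np.random.randn(4, 4, 5) + 1j * np.random.randn(4, 4, 5)
L_hat = pcfe(Nc, R=2)
print([L.shape for L in L_hat])   # [(4, 4), (4, 4)]
```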
5.1.2 Tensor Prewhitening Scheme: S-GSVD
Once the estimates $\hat{L}_1, \ldots, \hat{L}_R$ with $\hat{L}_r \in \mathbb{C}^{M_r \times M_r}$ of the correlation factor matrices are computed through (5.5), the S-GSVD prewhitening scheme can be executed as follows (see also Figure 5.1):
1) Prewhiten the measurement tensor $\mathcal{X} \in \mathbb{C}^{M_1 \times M_2 \times \ldots \times M_R \times N}$:
$$\tilde{\mathcal{X}} = \mathcal{X} \times_1 \hat{L}_1^{-1} \times_2 \hat{L}_2^{-1} \cdots \times_R \hat{L}_R^{-1} \qquad (5.8)$$
Note that due to the uncorrelatedness between the time instants, we have only R correlation factors, while the measurement tensor has R + 1 dimensions. By substituting our coloured measurement tensor (5.1) into (5.8),
$$\tilde{\mathcal{X}} = \left( \mathcal{A} \times_{R+1} S^T + \mathcal{N}^{(c)} \right) \times_1 \hat{L}_1^{-1} \times_2 \hat{L}_2^{-1} \cdots \times_R \hat{L}_R^{-1} \qquad (5.9)$$
$$= \mathcal{A} \times_1 \hat{L}_1^{-1} \times_2 \hat{L}_2^{-1} \cdots \times_R \hat{L}_R^{-1} \times_{R+1} S^T + \mathcal{N}, \qquad (5.10)$$
and taking into account the Kronecker model of the coloured noise tensor (5.3), the multi-dimensional noise component becomes white. However, the signal component of $\mathcal{X}$ has been distorted through the prewhitening. This must be accounted for in a later dewhitening step.
2) Compute the HOSVD low-rank approximation (2.14) of $\tilde{\mathcal{X}}$,
$$\tilde{\mathcal{X}} \approx \mathcal{S}^{[s]} \times_1 U_1^{[s]} \times_2 U_2^{[s]} \cdots \times_R U_R^{[s]} \times_{R+1} U_{R+1}^{[s]}, \qquad (5.11)$$
such that the corresponding subspace tensor $\tilde{\mathcal{U}}^{[s]}$ is
$$\tilde{\mathcal{U}}^{[s]} = \mathcal{S}^{[s]} \times_1 U_1^{[s]} \times_2 U_2^{[s]} \cdots \times_R U_R^{[s]}, \qquad (5.12)$$
where $\mathcal{S}^{[s]} \in \mathbb{C}^{p_1 \times p_2 \times \ldots \times p_R \times d}$ and $U_r^{[s]} \in \mathbb{C}^{M_r \times p_r}$ with $p_r = \min(M_r, d)$ for $r = 1, \ldots, R$. We assume again that $d \leq N$.
3) Dewhiten the estimated subspace in order to reconstruct the signal subspace:
$$\mathcal{U}^{[s]} = \tilde{\mathcal{U}}^{[s]} \times_1 \hat{L}_1 \times_2 \hat{L}_2 \cdots \times_R \hat{L}_R \qquad (5.13)$$
[Flowchart: estimate L_r via PCFE → prewhitening → HOSVD low-rank approximation → dewhitening (S-GSVD) → estimate parameters (STE, CFP-PE).]
Fig. 5.1. Basic steps of the S-GSVD prewhitening scheme with Prewhitening Correlation Factor Estimation (PCFE).
With the new, correctly dewhitened subspace tensor $\mathcal{U}^{[s]}$, the parameters can be estimated according to the Standard Tensor-ESPRIT or the CFP based parameter estimation (CFP-PE) scheme
according to the Standard Tensor-ESPRIT or CFP based parameter estimation (CFP-PE) scheme
(see Sections 4.2 and 4.3).
Originally, the S-GSVD was derived by applying multiple GSVDs [13] to the measurement
tensor – hence the name sequential GSVD. In this way, the matrix inversions in the prewhitening
step (5.8) can be avoided. However, the procedure presented above is more accurate than the
original S-GSVD and therefore preferable.
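The following sketch strings the three steps together (again with my own helper names); it uses explicit matrix inversions as in (5.8), i.e., it follows the variant described above rather than the original GSVD-based derivation.

```python
import numpy as np

def r_mode_unfolding(A, r):
    return np.moveaxis(A, r, 0).reshape(A.shape[r], -1)

def r_mode_product(A, U, r):
    rest = [s for i, s in enumerate(A.shape) if i != r]
    return np.moveaxis((U @ r_mode_unfolding(A, r)).reshape([U.shape[0]] + rest), 0, r)

def s_gsvd_subspace(X, L_hat, d):
    """Steps (5.8)-(5.13): prewhiten with the inverted correlation factor estimates,
    take the HOSVD low-rank signal subspace, then dewhiten it again."""
    R = len(L_hat)
    Xt = X
    for r, L in enumerate(L_hat):                       # 1) prewhitening (5.8)
        Xt = r_mode_product(Xt, np.linalg.inv(L), r)
    U = [np.linalg.svd(r_mode_unfolding(Xt, r), full_matrices=False)[0][:, :min(Xt.shape[r], d)]
         for r in range(Xt.ndim)]
    S = Xt
    for r in range(Xt.ndim):                            # 2) HOSVD core of (5.11)
        S = r_mode_product(S, U[r].conj().T, r)
    Us = S
    for r in range(R):                                  # subspace tensor (5.12)
        Us = r_mode_product(Us, U[r], r)
    for r, L in enumerate(L_hat):                       # 3) dewhitening (5.13)
        Us = r_mode_product(Us, L, r)
    return Us

X = np.random.randn(4, 4, 5) + 1j * np.random.randn(4, 4, 5)
L_hat = [np.eye(4), np.eye(4)]          # identity factors: reduces to the plain STE subspace
print(s_gsvd_subspace(X, L_hat, d=2).shape)   # (4, 4, 2)
```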
5.2 Iterative Sequential GSVD (I-S-GSVD)
If the second-order statistics of the noise cannot be estimated, e.g., if only a small number of noise snapshots is available, or if the noise cannot be measured without the presence of the signal component, then the Iterative Sequential GSVD (I-S-GSVD) can be used, which was proposed in conjunction with STE in [12]. The principal idea is to apply the prewhitening correlation factor estimation (PCFE) from Section 5.1.1 iteratively to compute estimates $\hat{L}_r$ of the correlation factors $L_r$. The concept of the I-S-GSVD prewhitening scheme is depicted in Figure 5.2. In contrast to [12], the I-S-GSVD concept was expanded by the option to choose CFP-PE in the parameter estimation step. This conjunction of I-S-GSVD and CFP-PE has not yet been investigated in the literature and will be scrutinized in the simulations of Section 6.
The I-S-GSVD algorithm works as follows:
1) Initialize $\hat{L}_r$ as $M_r \times M_r$ identity matrices for $r = 1, \ldots, R$.
2) Apply the S-GSVD from Section 5.1.2.
3) Estimate the parameters $\hat{\mu}_i^{(r)}$ via STE or CFP-PE (see Sections 4.2 and 4.3).
4) From the obtained $\hat{\mu}_i^{(r)}$, estimate the array steering tensor $\hat{\mathcal{A}}$ according to the model in (3.9). Using $\mathcal{X}$ and $\hat{\mathcal{A}}$, calculate the signal matrix
$$\hat{S} = \left( [\mathcal{X}]_{(R+1)} \cdot \left[ \hat{\mathcal{A}} \right]_{(R+1)}^{+} \right)^T, \qquad (5.14)$$
where $(\cdot)^+$ denotes the Moore-Penrose pseudo-inverse.
5) Given $\hat{\mathcal{A}}$ and $\hat{S}$, compute an estimate of the noise tensor:
$$\hat{\mathcal{N}}^{(c)} = \mathcal{X} - \hat{\mathcal{A}} \times_{R+1} \hat{S}^T \qquad (5.15)$$
6) From $\hat{\mathcal{N}}^{(c)}$, update the estimates $\hat{L}_r$ using PCFE (see Section 5.1.1).
7) Go back to step 2.
Fig. 5.2. Basic steps of the I-S-GSVD iterative prewhitening scheme.
According to [12], the root mean square change (RMSC) of the parameter estimates $\hat{\mu}_i^{(r)}$ between two iterations can be applied as a stopping criterion. Via simulations in conjunction with STE, it could be shown that between two and three iterations are always sufficient to achieve convergence. This fact could also be verified for the I-S-GSVD in conjunction with the CFP-PE, as will be shown in the following section.
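Steps 4) and 5) admit a compact sketch (illustrative, with a function name of my own); the toy check uses a noiseless tensor so that the estimated symbol matrix and the residual can be verified exactly.

```python
import numpy as np

def update_noise_estimate(X, A_hat):
    """Steps 4)-5) of the I-S-GSVD loop: signal matrix via (5.14) and
    noise tensor estimate via (5.15), using last-mode unfoldings."""
    R = X.ndim - 1
    N = X.shape[R]
    d = A_hat.shape[R]
    X_u = np.moveaxis(X, R, 0).reshape(N, -1)            # [X]_(R+1)
    A_u = np.moveaxis(A_hat, R, 0).reshape(d, -1)        # [A_hat]_(R+1)
    S_hat = (X_u @ np.linalg.pinv(A_u)).T                # (5.14), size d x N
    signal = np.moveaxis((S_hat.T @ A_u).reshape((N,) + X.shape[:R]), 0, R)
    return S_hat, X - signal                             # (5.15)

# Toy check with a noiseless 4 x 4 x 5 measurement tensor and d = 2 paths
A_hat = np.random.randn(4, 4, 2) + 1j * np.random.randn(4, 4, 2)
S_true = np.random.randn(2, 5) + 1j * np.random.randn(2, 5)
X = np.moveaxis((S_true.T @ np.moveaxis(A_hat, 2, 0).reshape(2, -1)).reshape((5, 4, 4)), 0, 2)
S_hat, Nc_hat = update_noise_estimate(X, A_hat)
print(np.allclose(S_hat, S_true), np.abs(Nc_hat).max() < 1e-10)  # True True
```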
6. Simulation Results
In this section, simulations carried out in MATLAB shall demonstrate the performance of the discussed multi-dimensional parameter estimation techniques and prewhitening schemes using the R-D harmonic retrieval model of (3.8), where the spatial frequencies $\mu_i^{(r)}$ are drawn from a uniform distribution in $[-\pi, \pi]$. The source symbols are i.i.d. ZMCSCG distributed with power equal to $\sigma_s^2$ for all sources. The SNR at the receiver is defined as
$$\mathrm{SNR} = 10 \cdot \log_{10} \frac{\sigma_s^2}{\sigma_n^2}, \qquad (6.1)$$
where $\sigma_n^2$ is the variance of the elements of the white noise tensor $\mathcal{N}$ in (3.8) for Section 6.1 and in (5.3) for Section 6.2.
For all simulations that were executed, a scenario with the specifications shown in Table 6.1 was employed. Five different parameters were estimated, rendering the scenario a 5-D parameter estimation problem.
                                          Variable   Estimated parameter
Transmitter antenna array of size 4 × 4   M_1 = 4    µ_i^(1): DOD azimuth
                                          M_2 = 4    µ_i^(2): DOD elevation
Receiver antenna array of size 4 × 4      M_3 = 4    µ_i^(3): DOA azimuth
                                          M_4 = 4    µ_i^(4): DOA elevation
Number of frequency bins                  M_5 = 4    µ_i^(5): path delay
Number of snapshots                       N = 4
Number of paths                           d = 3
Table 6.1. Specification of the simulated 5-D scenario.
If a simulation is carried out with L realizations for each SNR value, the overall RMSE reads as
$$\mathrm{RMSE} = \sqrt{ E\left\{ \sum_{r=1}^{R} \sum_{i=1}^{d} \left( \mu_i^{(r)} - \hat{\mu}_i^{(r)} \right)^2 \right\} }. \qquad (6.2)$$
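A direct transcription of (6.2), assuming the square root that the RMSE implies and L Monte Carlo runs stored along the first axis (names are mine):

```python
import numpy as np

def rmse(mu_true, mu_est):
    """Overall RMSE of (6.2): mu_true and mu_est have shape (L, R, d),
    i.e., one set of R x d spatial frequencies per Monte Carlo run."""
    squared_error = np.sum((mu_true - mu_est) ** 2, axis=(1, 2))   # sum over r and i
    return np.sqrt(squared_error.mean())                           # expectation over the L runs

L, R, d = 50, 5, 3
mu_true = np.random.uniform(-np.pi, np.pi, (L, R, d))
mu_est = mu_true + 0.01 * np.random.randn(L, R, d)
print(rmse(mu_true, mu_est))
```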
6.1 White Noise Case
First, different tensor-based parameter estimation schemes are briefly compared for a white-noise
scenario in Figure 6.1. The RMSE was plotted versus the SNR according to (6.2). One can see that
the tensor-based schemes Standard Tensor-ESPRIT (STE) (Section 4.2) and closed-form PARAFAC based parameter estimation (CFP-PE) (Section 4.3) have an improved performance over the ordinary Standard ESPRIT
(SE). However, comparing all schemes with the Cramer-Rao bound [14], there is still room for
improvement.
[Plot: RMSE (log scale) vs. SNR [dB]; curves: SE, 5-D STE, CFP-PE, Det. CRB.]
Fig. 6.1. RMSE vs. SNR for the white noise case for L = 50 runs.
6.2 Colored Noise Case
In this section, colored noise is generated according to (5.3). The main goal is to investigate the performance of the not yet investigated I-S-GSVD prewhitening method in conjunction with CFP-PE from Section 5.2, from now on denoted as I-S-CFP-PE. The new scheme is assessed against the plain non-iterative S-GSVD prewhitening scheme joined with CFP-PE, abbreviated as S-CFP-PE, as well as plain CFP-PE without prewhitening. In the simulations, it is considered that the elements of the noise covariance matrix in the r-th mode, $R_r = L_r \cdot L_r^H$, vary as a function of the correlation coefficient $\rho_r$, similarly as in [10]. As an example, the structure of $R_r$ as a function of $\rho_r$ for $M_r = 3$ is given as
$$R_r = \begin{bmatrix} 1 & \rho_r^* & (\rho_r^*)^2 \\ \rho_r & 1 & \rho_r^* \\ \rho_r^2 & \rho_r & 1 \end{bmatrix}, \qquad (6.3)$$
where $\rho_r$ is the correlation coefficient of the r-th mode. In the following simulations, however, the correlation coefficients were kept constant over all correlated dimensions, i.e., $\rho = \rho_r$ for all $r = 1, \ldots, R$. Note that other types of correlation models can also be used. To be consistent with (5.4), $L_r$ is normalized such that $\mathrm{tr}(L_r \cdot L_r^H) = M_r$. Again, the RMSE is computed according to (6.2).
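For reference, the following sketch builds $L_r$ from the correlation model (6.3) for an arbitrary $M_r$, including the trace normalization of (5.4); the function name and the generalization of (6.3) to arbitrary size are my own assumptions.

```python
import numpy as np

def correlation_factor(rho, M):
    """Correlation model (6.3): R_r[k, l] = rho^(k - l) for k >= l and the conjugate
    power above the diagonal, factorized by Cholesky and normalized per (5.4)."""
    k = np.arange(M)
    exponents = k[:, None] - k[None, :]
    Rr = np.where(exponents >= 0, rho ** exponents, np.conj(rho) ** (-exponents))
    Lr = np.linalg.cholesky(Rr)
    Lr *= np.sqrt(M / np.trace(Lr @ Lr.conj().T).real)   # tr(L_r L_r^H) = M_r
    return Lr

Lr = correlation_factor(0.9, 4)
print(np.round(Lr @ Lr.conj().T, 3))   # reproduces the structure of (6.3) for M_r = 4
```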
First of all, the convergence of the I-S-CFP-PE is scrutinized in Figure 6.2. One can see that convergence is reached after only three iterations, which is remarkable. Although a small gap to the S-CFP-PE scheme remains, the RMSE compared to the non-prewhitening scheme CFP-PE is improved by a factor of ten for the given scenario with correlation coefficient ρ = 0.9. It is expected that the remaining error vanishes for an increasing number of snapshots N; however, this was impossible to simulate due to the high computational complexity.
Next, the performance of these schemes is tested over a wide SNR range in a colored noise scenario with correlation coefficient ρ = 0.9 in Figure 6.3. In the low-SNR region, the I-S-CFP-PE delivers only marginally better estimates than the non-prewhitening scheme. Again, it is expected that this gap to the S-CFP-PE scheme would decrease significantly for an increased number of snapshots N. In the high-SNR region, the I-S-CFP-PE performs very close to the non-iterative S-CFP-PE and is thus able to successfully estimate the noise correlation factors.
In Figure 6.4, the performance over a varying correlation coefficient is investigated. For low ρ, that is, low correlation over all dimensions, all three schemes perform equally well. For high correlation, the estimates can be improved drastically by the prewhitening schemes. At the chosen SNR of 20 dB, the I-S-GSVD always stays very close to the performance of the non-iterative scheme.
Finally, in Figure 6.5, the performance of the schemes is assessed for a positioning error scenario. While all previously mentioned simulations were executed with shift-invariant outer product arrays, the sensor array is made non-shift-invariant in this simulation. To this end, the antennas of the first two dimensions, e.g., the 2-dimensional receiver antenna array, are randomly misplaced with a positioning error variance ρ_p. For this scenario, CFP combined with Peak Search (PS) can be successfully applied, while the CFP utilizing shift invariance (SI) naturally performs as badly as Standard Tensor-ESPRIT with S-GSVD prewhitening.
[Plot: RMSE (log scale) vs. iterations; curves: CFP-PE w/o PWT, S-CFP-PE, I-S-CFP-PE.]
Fig. 6.2. RMSE vs. Iterations with SNR= 15dB, correlation coefficient ρ = 0.9 and L = 20 runs.
[Plot: RMSE (log scale) vs. SNR [dB]; curves: CFP-PE w/o PWT, S-CFP-PE, I-S-CFP-PE.]
Fig. 6.3. RMSE vs. SNR with correlation coefficient ρ = 0.9, K = 4 iterations and L = 20 runs.
[Plot: RMSE (log scale) vs. correlation coefficient ρ; curves: CFP-PE w/o PWT, S-CFP-PE, I-S-CFP-PE.]
Fig. 6.4. RMSE vs. Correlation coefficient with SNR= 20dB, K = 4 iterations and L = 20 runs.
[Plot: RMSE (log scale) vs. positioning error variance σ_p; curves: CFP-PE w/o PWT, S-CFP-PE (SI), S-CFP-PE (PS), I-S-CFP (PS), STE S-GSVD.]
Fig. 6.5. RMSE vs. Array Spacing Variance with SNR= 40dB, correlation coefficient ρ = 0.9, K = 4
iterations and L = 15 runs.
7. Conclusions
The tensor-based parameter estimation techniques presented in this thesis achieve an improved accuracy compared to matrix-based schemes. The advantage of ESPRIT-based schemes is their low computational complexity due to the closed-form shift-invariance equations, while the closed-form PARAFAC based parameter estimation can be praised for its applicability to mixed array geometries and its robustness to arrays with positioning errors.
For scenarios with Kronecker-structured colored noise, the results show that the proposed tensor-based prewhitening remarkably improves the estimation accuracy of the plain CFP parameter estimator, while retaining its advantages as described above.
Simulations assessed the performance of the proposed iterative tensor-based S-GSVD
prewhitening in conjunction with a CFP based parameter estimator. This iterative algorithm can
achieve both a very good estimation of signal parameters and of the noise variance given a large
number of snapshots. The iteration converges very fast and the remaining error is small. The performance is very close to that obtained using the non-iterative S-GSVD prewhitening with knowledge of the noise covariance information.
A pointer for future research could be carrying out simulations with more advanced and realistic channel models, e.g., in geometry-based scenarios.
Appendix
A1 The Kronecker product
The Kronecker product is the matrix outer product of two matrices. Given $A \in \mathbb{C}^{M \times N}$ and $B \in \mathbb{C}^{P \times Q}$, the Kronecker product is the block matrix
$$A \otimes B = \begin{bmatrix} a_{11} B & \ldots & a_{1N} B \\ \vdots & \ddots & \vdots \\ a_{M1} B & \ldots & a_{MN} B \end{bmatrix} \in \mathbb{C}^{(M \cdot P) \times (N \cdot Q)}. \qquad (A1)$$
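For example, with numpy:

```python
import numpy as np

# The Kronecker product of a 2 x 2 and a 2 x 3 matrix is a 4 x 6 block matrix, cf. (A1):
# each entry a_mn of A scales a copy of B.
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5, 6],
              [7, 8, 9]])
K = np.kron(A, B)
print(K.shape)                              # (4, 6)
print(np.array_equal(K[:2, 3:], 2 * B))     # upper-right block is a_12 * B -> True
```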
Bibliography
[1] Roy, R.; Kailath, T.: ESPRIT – estimation of signal parameters via rotational invariance techniques. – In: IEEE Transactions on Acoustics, Speech, and Signal Processing 37 (July 1989), pp. 984–995.
[2] Haardt, M.; Roemer, F.; Galdo, G. D.: Higher-order SVD based subspace estimation to improve the parameter estimation accuracy in multi-dimensional harmonic retrieval problems. – In: IEEE Transactions on Signal Processing 56 (July 2008), pp. 3198–3213.
[3] De Lathauwer, L.; De Moor, B.; Vandewalle, J.: A multilinear singular value decomposition. – In: SIAM J. Matrix Anal. Appl. 21(4) (2000).
[4] Cattell, R. B.: Parallel proportional profiles and other principles for determining the choice of factors by rotation. – In: Psychometrika 9 (Dec. 1944).
[5] Bro, R.; Sidiropoulos, N.; Giannakis, G. B.: A fast least squares algorithm for separating trilinear mixtures. – In: Proc. Int. Workshop on Independent Component Analysis for Blind Signal Separation (ICA 99), Jan. 1999, pp. 289–294.
[6] Roemer, F.; Haardt, M.: A closed-form solution for multilinear PARAFAC decompositions. – In: Proc. 5th IEEE Sensor Array and Multich. Sig. Proc. Workshop (SAM 2008), Darmstadt, Germany, July 2008, pp. 487–491.
[7] da Costa, J. P. C. L.; Roemer, F.; Weis, M.; Haardt, M.: Robust R-D parameter estimation via closed-form PARAFAC. – In: International ITG Workshop on Smart Antennas (WSA 2010), Dresden, Germany, March 2010, pp. 99–106.
[8] Huizenga, H. M.; de Munck, J. C.; Waldorp, L. J.; Grasman, R. P. P. P.: Spatiotemporal EEG/MEG source analysis based on a parametric noise covariance model. – In: IEEE Transactions on Biomedical Engineering 49 (June 2002), pp. 533–539.
[9] Park, B.; Wong, T. F.: Training sequence optimization in MIMO systems with colored noise. – In: Military Communications Conference (MILCOM 2003), Gainesville, USA, Oct. 2003.
[10] da Costa, J. P. C. L.; Roemer, F.; Haardt, M.: Sequential GSVD based prewhitening for multidimensional HOSVD based subspace estimation. – In: Proc. International ITG Workshop on Smart Antennas (WSA 2009), Berlin, Germany, Feb. 2009.
[11] da Costa, J. P. C. L.; Haardt, M. et al.: Robust R-D parameter estimation via closed-form PARAFAC in Kronecker colored environments. – In: International Symposium on Wireless Communication Systems (ISWCS 2010), York, United Kingdom, Sep. 2010, pp. 115–119.
[12] da Costa, J. P. C. L.; Roemer, F.; Haardt, M.: Iterative Sequential GSVD (I-S-GSVD) based prewhitening for multidimensional HOSVD based subspace estimation without knowledge of the noise covariance information. – In: International ITG Workshop on Smart Antennas (WSA 2010), Dresden, Germany, March 2010, pp. 151–155.
[13] Vandewalle, J.; De Lathauwer, L.; Comon, P.: The generalized higher order singular value decomposition and the oriented signal-to-signal ratios of pairs of signal tensors and their use in signal processing. – In: European Conference on Circuit Theory and Design, Krakow, Poland, Sep. 2003.
[14] Stoica, P.; Nehorai, A.: MUSIC, maximum likelihood, and Cramér-Rao bound. – In: IEEE Transactions on Acoustics, Speech, and Signal Processing 37 (May 1989), pp. 720–741.
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live StreamsTop 5 Benefits OF Using Muvi Live Paywall For Live Streams
Top 5 Benefits OF Using Muvi Live Paywall For Live Streams
 
HTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation StrategiesHTML Injection Attacks: Impact and Mitigation Strategies
HTML Injection Attacks: Impact and Mitigation Strategies
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 

Multi-Dimensional Parameter Estimation and Prewhitening

  • 4. List of Figures 1.1 MIMO multipath scenario with 2×2 antenna arrays on the transmitter and receiver side. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 2.1 Examples and notation for a scalar, vector, matrix and order-3 tensor. . . . . . . . . 10 2.2 Unfoldings of a 4 × 5 × 3-tensor. Left: the 1-mode vectors, center: the 2- mode vectors, right: the 3-mode vectors which are then used as columns of the corre- sponding matrix unfolding. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.3 n-mode products of an order-3 tensor. Left: the 1-mode product, center: the 2- mode product, right: the 3-mode product. . . . . . . . . . . . . . . . . . . . . . . 11 2.4 Full SVD, economy-size SVD and low-rank approximation of matrix A ∈ C5×4 with rank ρ = 3 and model order d = 2. . . . . . . . . . . . . . . . . . . . . . . . 12 2.5 Core tensor of an order-3 tensor with n-ranks ρ1, ρ2, and ρ3. Only the first ρ1 × ρ2 × ρ3 elements indicated in blue are non-zero. . . . . . . . . . . . . . . . . . . . 13 2.6 Illustration of PARAFAC decomposition for a 3-way tensor. Above: representation as a sum of rank-one tensors; below: r-mode products based decomposition. . . . . 14 3.1 2-dimensional outer-product based array (OPA) of size 3 × 3. . . . . . . . . . . . . 17 4.1 R-D Standard ESPRIT (R-D SE), R-D Standard Tensor-ESPRIT (R-D STE) and Closed-Form PARAFAC based Parameter Estimation (CFP-PE). . . . . . . . . . . 19 5.1 Basic steps of S-GSVD prewhitening scheme with Prewhitening Correlation Fac- tor Estimation (PCFE). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 5.2 Basic steps of I-S-GSVD iterative prewhitening scheme. . . . . . . . . . . . . . . 24 6.1 RMSE vs. SNR for the white noise case for L = 50 runs. . . . . . . . . . . . . . . 26 6.2 RMSE vs. Iterations with SNR= 15dB, correlation coefficient ρ = 0.9 and L = 20 runs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 6.3 RMSE vs. SNR with correlation coefficient ρ = 0.9, K = 4 iterations and L = 20 runs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 6.4 RMSE vs. Correlation coefficient with SNR= 20dB, K = 4 iterations and L = 20 runs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 6.5 RMSE vs. Array Spacing Variance with SNR= 40dB, correlation coefficient ρ = 0.9, K = 4 iterations and L = 15 runs. . . . . . . . . . . . . . . . . . . . . . . . . 29 4
  • 5. Acknowledgements
I would like to express my sincerest gratitude to Prof. Dr.-Ing. João Paulo Carvalho Lustosa da Costa, adjunct professor at Universidade de Brasília (UnB), Brazil, for having given me the opportunity to work on this interesting topic under his supervision. His bright ideas and professional guidance regarding my thesis, along with his invaluable support in everyday issues, have made this work possible and made my stay in Brasília unforgettable. I am also very thankful for the funding from the German Academic Exchange Service (DAAD) through the RISE weltweit programme, which has enabled my internship at UnB. Finally, I would like to thank M.Sc. Qing Bai and Univ.-Prof. Dr.techn. Josef A. Nossek from the Institute for Circuit Theory and Signal Processing at Technical University of Munich (TUM) for the acceptance of this thesis and the good cooperation.
  • 6.
  • 7. Abstract
High-resolution parameter estimation is a research field that has gained considerable attention in the past decades. A typical application is in MIMO channel measurements, where parameters such as direction-of-arrival (DOA), direction-of-departure (DOD), path delay and Doppler spread are to be extracted from the measured signal.

Recently, subspace-based parameter estimation techniques have been improved by taking advantage of the multi-dimensional structure inherent in the measurement signal. This is accomplished by adopting subspace-based decompositions using tensor calculus, i.e., higher-dimensional matrices. State-of-the-art tensor-based decompositions include the Higher-Order Singular Value Decomposition (HOSVD) low-rank approximation and the Closed-Form Parallel Factor Analysis (CFP). The former served as the basis for the Standard Tensor-ESPRIT (STE), and the latter laid the foundation for the CFP based parameter estimation scheme (CFP-PE); both are presented in the first part of this thesis. The latter technique is appealing since it is applicable to arbitrary array geometries as well as to outer product based arrays.

The second part of this thesis investigates the case in which the parameter estimation is subject to colored noise or interference, which can severely deteriorate the estimation accuracy. In order to avoid this, tensor-based prewhitening techniques are applied which exploit the Kronecker structure of the noise correlation matrices. Assuming that estimates of the noise covariance factors are available, e.g., through a noise-only measurement, the estimation accuracy can be significantly improved by using the Sequential Generalized Singular Value Decomposition (S-GSVD). In case the noise covariance information is unknown, the Iterative Sequential Generalized Singular Value Decomposition (I-S-GSVD) can successfully be applied. These tensor-based prewhitening techniques, S-GSVD and I-S-GSVD, can each be combined with the above-mentioned multi-dimensional HOSVD and CFP based parameter estimation schemes.

As a novelty in this thesis, I-S-GSVD prewhitening in conjunction with CFP based parameter estimation is proposed. In this way, the advantages of both techniques are joined, that is, the suitability of the I-S-GSVD for data contaminated with colored noise without knowledge of the noise covariance, and the applicability of the CFP to mixed array geometries and its robustness to arrays with positioning errors.
  • 8. 1. Introduction
High-resolution parameter estimation involves the extraction of relevant parameters from a set of R-dimensional (R-D) data measured by an antenna array. In the field of MIMO channel sounding, the considered dimensions of the measured data can correspond to time, frequency, or spatial dimensions, i.e., the measurements captured by one- or two-dimensional antenna arrays at the transmitter and the receiver. The estimated parameters include direction-of-arrival (DOA), direction-of-departure (DOD), Doppler spread, or path delay. In this context, the desired parameters are also called spatial frequencies. A typical multipath scenario with 2 × 2 antenna arrays at the transmitter and receiver side is illustrated in Figure 1.1. Other applications of parameter estimation are manifold, reaching from radar and sonar to biomedical imaging and seismology.

Fig. 1.1. MIMO multipath scenario with 2 × 2 antenna arrays on the transmitter and receiver side.

A wide class of efficient parameter estimation schemes using subspace decomposition is based on Standard ESPRIT (SE) [1], which exploits the symmetries present in a one-dimensional antenna array. A generalized scheme which makes Standard ESPRIT applicable to multi-dimensional measurements is referred to as R-D Standard ESPRIT (R-D SE) [2], in which the R-dimensional data is unfolded into a matrix via a stacking operation. Obviously, this representation sees the problem from just one perspective, i.e., one projection, and neglects the R-D grid structure inherent in the data. Consequently, parameters cannot be estimated properly when signals are not resolvable in a certain dimension. A possibility to keep the multi-dimensional structure is to express the estimation problem using higher-dimensional matrices, so-called tensors. By considering all dimensions as a whole, it is possible to estimate parameters even if they are not resolvable in each dimension separately, and the resolution, accuracy, and robustness can be improved.

Tensor-based parameter estimation schemes have gained attention in the past few years and are presented in the first part of this thesis. Tensor-based extensions of the ESPRIT scheme have been developed recently, namely Standard Tensor-ESPRIT (STE) and Unitary Tensor-ESPRIT (UTE) [2], which utilize a tensor extension of the Singular Value Decomposition (SVD), the so-called Higher-Order SVD (HOSVD) [3]. However, one harsh constraint on ESPRIT schemes
  • 9. is imposed by the shift-invariance property, which stipulates that the antenna array must have a specific symmetric lattice structure. Positioning errors in real antenna arrays, for example, lead to a violation of this constraint. Schemes based on Parallel Factor Analysis (PARAFAC), a tool rooted in psychometrics [4], do not require the shift-invariance property, as they can be applied to arbitrary array geometries. There exist iterative solutions for the PARAFAC decomposition such as Alternating Least Squares (ALS) [5], which we do not consider in this thesis in favour of the closed-form PARAFAC (CFP) [6] solution. Based on this closed-form scheme, the closed-form PARAFAC based Parameter Estimation scheme (CFP-PE) [7] was proposed, which delivers accurate estimates for arbitrary arrays and is robust against positioning errors.

The second part of this thesis is dedicated to prewhitening schemes that mitigate the effect of multi-dimensional colored noise or interference present at the receiver and/or transmitter antennas. Since the colored noise affects the signal component more strongly, its presence can severely deteriorate the estimation accuracy. Prewhitening aims to distribute the noise power evenly across the noise space in order to improve the estimation accuracy. Moreover, the presented schemes assume that the colored noise has a Kronecker structure, as can be found in certain EEG [8] and MIMO applications [9], where the noise covariance matrix is taken to be the Kronecker product of the temporal and spatial correlation matrices.

A tensor-based prewhitening scheme that exploits the inherent Kronecker structure of the noise is the Sequential Generalized Singular Value Decomposition (S-GSVD), which can be applied if the second order statistics of the noise are known. This scheme was combined with subspace decompositions via the HOSVD [10] and the closed-form PARAFAC [11]. Both combinations offer an improved accuracy over matrix based prewhitening schemes, as well as high computational efficiency.

The iterative counterpart of the above prewhitening scheme (I-S-GSVD) [12] can be used if noise samples cannot be collected without the presence of the signal, thus hindering an estimation of the noise statistics. The proposal in this thesis is to combine the I-S-GSVD with the CFP decomposition. In this way, one joins the advantages of both techniques, that is, the suitability of the I-S-GSVD for data contaminated with colored noise without knowledge of the noise statistics, and the applicability of the CFP to mixed array geometries as well as the robustness to arrays with positioning errors.

The remainder of this thesis is organized as follows. A preliminary introduction to tensor calculus and the subspace decomposition of tensor-shaped data is given in Section 2. The data model and its tensor notation are presented in Section 3. The basic concepts of the above mentioned multi-dimensional parameter estimation schemes are explained in Section 4. Efficient tensor-based prewhitening schemes are discussed in Section 5. Section 6 assesses the performance and accuracy of the presented methods in MATLAB. Finally, conclusions are drawn in Section 7.
  • 10. 2. Tensor Calculus
The following section aims at familiarizing the reader with fundamental tensor calculus, which builds the basis for all multi-dimensional parameter estimation and prewhitening techniques presented in this thesis. The notation is in accordance with [3]. Furthermore, the tensor extension of the Singular Value Decomposition (SVD), the so-called Higher-Order SVD, is presented.

In essence, tensors are higher-dimensional matrices. An order-R tensor (also called R-dimensional or R-way tensor) is denoted by the calligraphic variable

    $\mathcal{A} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R}$ ,   (2.1)

which means that $\mathcal{A}$ has $M_r$ complex elements along the dimension (or mode) $r$ for $r = 1, \ldots, R$. A single tensor element is symbolized by

    $a_{m_1, m_2, \ldots, m_R} \in \mathbb{C}$ ,  $m_r = 1, \ldots, M_r$ ,  $r = 1, \ldots, R$ .   (2.2)

In this sense, an order-0 tensor is a scalar, an order-1 tensor is equivalent to a vector, and an order-2 tensor represents a matrix. Order-3 tensors can be thought of as elements arranged in a cuboid. Higher-dimensional tensors (R > 3) go beyond graphical imagination, yet are the most natural way to represent the data sampled from antenna grids, as will be shown later on. An illustrative explanation together with the notation used in this thesis is shown in Fig. 2.1.

Fig. 2.1. Examples and notation for a scalar, vector, matrix and order-3 tensor.

2.1 r-Mode Unfolding

The r-mode unfolding of a tensor $\mathcal{A}$ is denoted as

    $[\mathcal{A}]_{(r)} \in \mathbb{C}^{M_r \times (M_1 \cdot M_2 \cdots M_{r-1} \cdot M_{r+1} \cdots M_R)}$   (2.3)
  • 11. and represents the matrix of r-mode vectors of the tensor $\mathcal{A}$. The r-mode vectors of a tensor are obtained by varying the r-th index within its range $(1, \ldots, M_r)$ and keeping all the other indices fixed. In other words, unfolding a tensor means to slice it into vectors along a certain dimension r and to rearrange them as a matrix. As an example, all possible r-mode vectors of an order-3 tensor of size 4 × 5 × 3 are shown in Fig. 2.2. The order for rearranging the columns is chosen conforming to [3] and indicated by the arrows in the figure.

Fig. 2.2. Unfoldings of a 4 × 5 × 3-tensor. Left: the 1-mode vectors, center: the 2-mode vectors, right: the 3-mode vectors, which are then used as columns of the corresponding matrix unfolding.

2.2 r-Mode Product

The r-mode product of a tensor $\mathcal{A} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R}$ and a matrix $U \in \mathbb{C}^{J_r \times M_r}$ along the r-th mode is denoted as

    $\mathcal{B} = \mathcal{A} \times_r U \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times J_r \times \cdots \times M_R}$ .   (2.4)

Note that the number of elements in the r-th dimension of $\mathcal{A}$, $M_r$, must match the number of columns of $U$. The r-mode product is obtained by multiplying all r-mode vectors of $\mathcal{A}$ from the left-hand side by the matrix $U$. It follows that

    $[\mathcal{A} \times_r U]_{(r)} = U \cdot [\mathcal{A}]_{(r)}$ .   (2.5)

Fig. 2.3 shows possible r-mode products of the order-3 tensor $\mathcal{A}$ with matrices $U_1$, $U_2$ and $U_3$.

Fig. 2.3. n-mode products of an order-3 tensor. Left: the 1-mode product, center: the 2-mode product, right: the 3-mode product.
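To make (2.3)–(2.5) concrete, the following short NumPy sketch implements one possible r-mode unfolding and the r-mode product via the unfolding identity (2.5). It is only an illustration: the helper names are invented for this example, any fixed column ordering can be used as long as it is applied consistently (the thesis follows the ordering of [3]), and the thesis's own simulations were carried out in MATLAB rather than Python.

```python
import numpy as np

def unfold(A, r):
    """r-mode unfolding [A]_(r): the r-mode vectors become the columns.
    One consistent column ordering is used; [3] fixes a particular one."""
    return np.moveaxis(A, r, 0).reshape(A.shape[r], -1)

def fold(M, r, shape):
    """Inverse of unfold for a target tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != r]
    return np.moveaxis(M.reshape([shape[r]] + rest), 0, r)

def mode_product(A, U, r):
    """r-mode product A x_r U, computed through (2.5): [A x_r U]_(r) = U [A]_(r)."""
    new_shape = list(A.shape)
    new_shape[r] = U.shape[0]
    return fold(U @ unfold(A, r), r, new_shape)

# example with the 4 x 5 x 3 tensor of Fig. 2.2 / Fig. 2.3
A = np.random.randn(4, 5, 3) + 1j * np.random.randn(4, 5, 3)
print(unfold(A, 1).shape)               # (5, 12): the 2-mode unfolding
U2 = np.random.randn(2, 5)              # a J_2 x M_2 matrix for the 2-mode product
print(mode_product(A, U2, 1).shape)     # (4, 2, 3), cf. (2.4)
```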
  • 12. 2.3 Subspace-based Decomposition of Tensors

Since the parameter estimation techniques presented in this thesis are based on the analysis of the signal subspace, methods for decomposing the subspace of the tensor-shaped measurements are required. A technique that is commonly applied in conventional matrix-based parameter estimation methods (e.g., in Standard ESPRIT) is the Singular Value Decomposition (SVD). Recall the SVD of a matrix $A \in \mathbb{C}^{M \times N}$, which is defined as

    $A = U \Sigma V^H$ ,   (2.6)

where $U \in \mathbb{C}^{M \times M}$ and $V \in \mathbb{C}^{N \times N}$ are unitary matrices and $\Sigma \in \mathbb{R}^{M \times N}$ is a pseudo-diagonal matrix containing the non-negative singular values of $A$ ordered by magnitude. If $\rho$ is the rank of the rank-deficient matrix $A$, i.e., there exist exactly $\rho$ non-zero singular values, the corresponding lossless economy-size SVD is

    $A = U_s \Sigma_s V_s^H$ ,   (2.7)

where $U_s \in \mathbb{C}^{M \times \rho}$ and $V_s \in \mathbb{C}^{N \times \rho}$ contain the first $\rho$ columns of $U$ and $V$, respectively, and $\Sigma_s \in \mathbb{R}^{\rho \times \rho}$ is the full-rank diagonal subspace matrix containing the singular values on its main diagonal. Considering only the $d \le \rho$ significant singular values, a further reduction can be achieved through a so-called low-rank approximation (or truncated SVD)

    $A \approx U'_s \Sigma'_s V'^H_s$ ,   (2.8)

where $U'_s \in \mathbb{C}^{M \times d}$, $V'_s \in \mathbb{C}^{N \times d}$ and $\Sigma'_s \in \mathbb{R}^{d \times d}$. All three types of SVD are shown in Figure 2.4. In a MIMO channel measurement context, $d$ is referred to as the model order, that is, the number of principal multipath components bearing a strong signal. The low-rank approximation thus isolates the signal subspace of the measured signal, while treating non-significant multipath components as noise.

Fig. 2.4. Full SVD, economy-size SVD and low-rank approximation of matrix $A \in \mathbb{C}^{5 \times 4}$ with rank $\rho = 3$ and model order $d = 2$.

2.3.1 Tensor Ranks

For matrices, the column (row) rank is defined as the dimension of the vector space spanned by the columns (rows). As a fundamental theorem, the column rank and the row rank of a matrix are always equal. For higher-order tensors, there exist two different rank definitions:
  • 13. • The r-ranks of an R-dimensional tensor are defined as the dimensions of the vector spaces spanned by the r-mode vectors of the tensor. Consequently, the r-rank is equal to the rank of the r-mode unfolding. Unlike for matrices, the r-ranks of a tensor are not required to be equal.

• The tensor rank. A tensor $\mathcal{A} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R}$ has rank one if it can be represented via outer products of R non-zero vectors $f^{(r)} \in \mathbb{C}^{M_r}$ as

    $\mathcal{A} = f^{(1)} \circ f^{(2)} \circ \ldots \circ f^{(R)}$ .   (2.9)

Consequently, a tensor $\mathcal{A}$ has rank $r$ if it can be stated as a linear combination of $r$ rank-one tensors and if this cannot be accomplished with fewer than $r$ terms:

    $\mathcal{A} = \sum_{n=1}^{r} f_n^{(1)} \circ f_n^{(2)} \circ \ldots \circ f_n^{(R)}$   (2.10)

Note that

    $\text{r-rank}(\mathcal{A}) \le \text{rank}(\mathcal{A}) \quad \forall r = 1, \ldots, R$ ,   (2.11)

which means that the tensor rank of a higher-order tensor can be larger than all its r-ranks.

2.3.2 The Higher-Order SVD (HOSVD)

Analogously to the SVD of a matrix, we define the Higher-Order SVD (HOSVD) [3] of a tensor $\mathcal{A} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R}$ via the SVDs of all r-mode unfoldings of the tensor. It is given by

    $\mathcal{A} = \mathcal{S} \times_1 U_1 \times_2 U_2 \cdots \times_R U_R$ ,   (2.12)

where $U_r \in \mathbb{C}^{M_r \times M_r}$, $r = 1, 2, \ldots, R$ are the unitary matrices containing the singular vectors of the r-th mode unfolding. $\mathcal{S} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R}$ is the core tensor, which is not diagonal but satisfies the so-called all-orthogonality conditions [3]. Figure 2.5 depicts the core tensor of an order-3 tensor. Only the first $\rho_1 \times \rho_2 \times \rho_3$ elements of the core tensor are non-zero. The size of the blue cuboid thus indicates the r-ranks $\rho_r$ of the tensor $\mathcal{A}$, as they were defined in Section 2.3.1.

Fig. 2.5. Core tensor of an order-3 tensor with n-ranks $\rho_1$, $\rho_2$, and $\rho_3$. Only the first $\rho_1 \times \rho_2 \times \rho_3$ elements indicated in blue are non-zero.

Therefore, an economy-size HOSVD of $\mathcal{A}$ can be stated as

    $\mathcal{A} = \mathcal{S}^{[s]} \times_1 U_1^{[s]} \times_2 U_2^{[s]} \cdots \times_R U_R^{[s]}$ ,   (2.13)

where $\mathcal{S}^{[s]} \in \mathbb{C}^{\rho_1 \times \rho_2 \times \cdots \times \rho_R}$ as shown in Figure 2.5, and $U_r^{[s]} \in \mathbb{C}^{M_r \times \rho_r}$, $r = 1, 2, \ldots, R$ contain the first $\rho_r$ columns of $U_r$. Note that $\rho_r \le M_r$ for all $r = 1, 2, \ldots, R$. Finally, for a model order $d$, the corresponding HOSVD low-rank approximation is

    $\mathcal{A} \approx \mathcal{S}'^{[s]} \times_1 U_1'^{[s]} \times_2 U_2'^{[s]} \cdots \times_R U_R'^{[s]}$ ,   (2.14)

where $\mathcal{S}'^{[s]} \in \mathbb{C}^{d \times d \times \cdots \times d}$, and $U_r'^{[s]} \in \mathbb{C}^{M_r \times d}$, $r = 1, 2, \ldots, R$ are the matrices of r-mode singular vectors. In practice, the HOSVD is obtained via the SVDs of the matrix unfoldings.
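In line with the last remark, each basis $U_r^{[s]}$ can be obtained from the truncated SVD (2.8) of the corresponding unfolding, and the core tensor then follows from r-mode products with the Hermitian-transposed bases. The following NumPy lines are an illustrative sketch of this recipe; the helper names and tensor sizes are invented for the example, and the thesis's own MATLAB implementation is not reproduced here.

```python
import numpy as np

def unfold(A, r):
    return np.moveaxis(A, r, 0).reshape(A.shape[r], -1)

def mode_product(A, U, r):
    shape = list(A.shape); shape[r] = U.shape[0]
    rest = [s for i, s in enumerate(A.shape) if i != r]
    return np.moveaxis((U @ unfold(A, r)).reshape([shape[r]] + rest), 0, r)

def hosvd_low_rank(A, d):
    """HOSVD low-rank approximation (2.14) with at most d components per mode."""
    U = [np.linalg.svd(unfold(A, r), full_matrices=False)[0][:, :min(d, A.shape[r])]
         for r in range(A.ndim)]
    S = A
    for r, Ur in enumerate(U):
        S = mode_product(S, Ur.conj().T, r)   # truncated core tensor
    return S, U

A = np.random.randn(4, 5, 3) + 1j * np.random.randn(4, 5, 3)
S, U = hosvd_low_rank(A, d=2)
A_hat = S
for r, Ur in enumerate(U):
    A_hat = mode_product(A_hat, Ur, r)        # reconstruction from core and bases
print(A.shape, S.shape, A_hat.shape)          # (4, 5, 3) (2, 2, 2) (4, 5, 3)
```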
  • 14. 2.3.3 PARAFAC decomposition

The Parallel Factor Analysis (PARAFAC), a tool that originally stems from the field of psychometrics [4], takes a different approach to decomposing a tensor. While the HOSVD is focused on the r-spaces, PARAFAC considers the fact that the SVD can be seen as a decomposition of a matrix into the sum of a minimal number of rank-one matrices, which are given by the corresponding left and right singular vectors and weighted by the corresponding singular values. In the same manner, we can decompose the R-dimensional data tensor into a sum of a minimal number of rank-one tensors, as they were defined in (2.9). The aim of PARAFAC is therefore to decompose a tensor $\mathcal{A}$ of rank $d$ into a sum of $d$ rank-one tensors:

    $\mathcal{A} = \sum_{n=1}^{d} f_n^{(1)} \circ f_n^{(2)} \circ \ldots \circ f_n^{(R)}$ ,   (2.15)

where $f_n^{(r)} \in \mathbb{C}^{M_r}$, $n = 1, \ldots, d$. This means that the model order coincides with the tensor rank as defined in (2.10). By defining the so-called factor matrices $F^{(r)} \in \mathbb{C}^{M_r \times d}$, which contain the vectors $f_i^{(r)}$ as columns,

    $F^{(r)} = \left[ f_1^{(r)}, \ldots, f_d^{(r)} \right] \in \mathbb{C}^{M_r \times d}$ ,   (2.16)

the PARAFAC decomposition of a tensor $\mathcal{A} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R}$ with model order $d$ can be rewritten as

    $\mathcal{A} = \mathcal{I}_{R,d} \times_1 F^{(1)} \times_2 F^{(2)} \cdots \times_R F^{(R)}$ ,   (2.17)

where $\mathcal{I}_{R,d}$ is the R-dimensional identity tensor of size $d \times d \times \ldots \times d$. Its elements are equal to one for indices $i_1 = i_2 = \ldots = i_R$ and zero otherwise. Comparing (2.17) with the HOSVD low-rank approximation (2.14), the core tensor is replaced by the "diagonal" identity tensor in the PARAFAC decomposition. The dimensions are thus completely decoupled. Figure 2.6 illustrates the PARAFAC decomposition of an order-3 tensor, first as a sum of rank-one tensors according to (2.15), then as an r-mode products based decomposition (2.17).

Fig. 2.6. Illustration of PARAFAC decomposition for a 3-way tensor. Above: representation as a sum of rank-one tensors; below: r-mode products based decomposition.

There exist iterative solutions for accomplishing the PARAFAC decomposition, such as Multilinear Alternating Least Squares (MALS) [5]. However, the MALS algorithm is not suitable for the case that the factor matrices are rank deficient [6]. Moreover, it has a high computational complexity and its convergence is not guaranteed, since it is an iterative solution. The solution used in
  • 15. this thesis is the Closed-Form PARAFAC (CFP) [6], which uses several simultaneous matrix diagonalizations based on the HOSVD. The problem here is the computationally expensive task of finding the correct factor matrix estimates out of a large set of estimates. However, the computational complexity of the CFP can be drastically reduced by computing only one solution.
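Although the CFP algorithm of [6] itself is not reproduced here, the PARAFAC model (2.15)–(2.17) is easy to state in code. The following NumPy sketch builds the identity tensor $\mathcal{I}_{R,d}$, reconstructs a tensor from arbitrary factor matrices via (2.17), and verifies that the result equals the sum of rank-one terms (2.15); all sizes are illustrative assumptions.

```python
import numpy as np

def unfold(A, r):
    return np.moveaxis(A, r, 0).reshape(A.shape[r], -1)

def mode_product(A, U, r):
    shape = list(A.shape); shape[r] = U.shape[0]
    rest = [s for i, s in enumerate(A.shape) if i != r]
    return np.moveaxis((U @ unfold(A, r)).reshape([shape[r]] + rest), 0, r)

def identity_tensor(R, d):
    """R-dimensional identity tensor I_{R,d}: ones on the superdiagonal."""
    I = np.zeros((d,) * R)
    I[tuple(np.arange(d) for _ in range(R))] = 1.0
    return I

def parafac_reconstruct(factors):
    """Rebuild the tensor from its factor matrices F^(1), ..., F^(R), cf. (2.17)."""
    A = identity_tensor(len(factors), factors[0].shape[1])
    for r, F in enumerate(factors):
        A = mode_product(A, F, r)
    return A

F = [np.random.randn(4, 3), np.random.randn(5, 3), np.random.randn(6, 3)]   # d = 3
A_model = parafac_reconstruct(F)
A_sum = sum(np.einsum('i,j,k->ijk', F[0][:, n], F[1][:, n], F[2][:, n]) for n in range(3))
print(np.allclose(A_model, A_sum))   # True: (2.17) equals the sum of rank-one terms (2.15)
```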
  • 16. 3. Data Model

The tensor notation introduced in Section 2 is a convenient way to represent multi-dimensional signals sampled from antenna grids. For our data model, we assume that d superimposed planar wavefronts are captured by an R-dimensional (R-D) grid with $M_r$ sensors in each dimension $r \in \{1, \ldots, R\}$. These dimensions can, e.g., be the horizontal and vertical axes of the transmitter and receiver arrays, or frequency bins. Each dimension r represents a spatial frequency $\mu_i^{(r)}$ to be estimated for each path $i$, $i = 1, \ldots, d$. The spatial frequencies correspond to physical parameters such as the elevation or azimuth angle of the direction-of-departure or direction-of-arrival, the time delay of arrival, or the Doppler shift. At a sampling time instant n and sensor $(m_1, \ldots, m_R)$, we obtain the single measurement

    $x_{m_1, \ldots, m_R, n} = \sum_{i=1}^{d} s_{i,n} \cdot \prod_{r=1}^{R} e^{j (m_r - 1) \mu_i^{(r)}} + n_{m_1, \ldots, m_R, n}$ ,   (3.1)

where $s_{i,n}$ are the complex symbols from the i-th source at snapshot n. The noise elements $n_{m_1, \ldots, m_R, n}$ are i.i.d. ZMCSCG (zero-mean circularly-symmetric complex Gaussian) with variance $\sigma_n^2$. Note that in Section 4 this noise is assumed to be white, whereas the colored noise case is considered in Section 5. The data are collected in N consecutive time instants, called snapshots.

The model order d, that is, the number of principal multipath components, is assumed to be known. It can be estimated by using multi-dimensional model order selection schemes [7]. Furthermore, we assume that $d \le N$ and $d \le M_{\max}$ (overdetermined system). The signal is taken to be narrowband such that the antenna element spacing does not exceed half a wavelength. Figure 3.1 shows an example of a measurement grid in the form of a 2-dimensional outer-product array (OPA), where all distances $\Delta_i^{(r)}$ for $i = 1, 2, 3$ and $r = 1, 2$ can take different values.

3.1 Matrix Notation

For the matrix notation, the measurements have to be aligned into a matrix, which is accomplished by appropriate stacking. If we capture measurements over N subsequent time instants and stack each
  • 17. Fig. 3.1. 2-dimensional outer-product based array (OPA) of size 3 × 3.

snapshot into a column of a matrix, one obtains the measurement matrix $X \in \mathbb{C}^{M \times N}$

    $X = \begin{bmatrix} x_{1,1,\ldots,1,1,1} & x_{1,1,\ldots,1,1,2} & \cdots & x_{1,1,\ldots,1,1,N} \\ x_{1,1,\ldots,1,2,1} & x_{1,1,\ldots,1,2,2} & \cdots & x_{1,1,\ldots,1,2,N} \\ \vdots & \vdots & & \vdots \\ x_{1,1,\ldots,1,M_R,1} & x_{1,1,\ldots,1,M_R,2} & \cdots & x_{1,1,\ldots,1,M_R,N} \\ x_{1,1,\ldots,2,1,1} & x_{1,1,\ldots,2,1,2} & \cdots & x_{1,1,\ldots,2,1,N} \\ \vdots & \vdots & & \vdots \\ x_{M_1,M_2,\ldots,M_R,1} & x_{M_1,M_2,\ldots,M_R,2} & \cdots & x_{M_1,M_2,\ldots,M_R,N} \end{bmatrix}$ ,   (3.2)

where $M = \prod_{r=1}^{R} M_r$. The additive noise samples can be summarized in a noise matrix $N \in \mathbb{C}^{M \times N}$, which is stacked in the same fashion as $X$. Using matrix-vector notation for the data model (3.1), one obtains

    $X = A \cdot S + N$ ,   (3.3)

where

    $S = \begin{bmatrix} s_{1,1} & s_{1,2} & \cdots & s_{1,N} \\ s_{2,1} & s_{2,2} & \cdots & s_{2,N} \\ \vdots & \vdots & & \vdots \\ s_{d,1} & s_{d,2} & \cdots & s_{d,N} \end{bmatrix} \in \mathbb{C}^{d \times N}$   (3.4)

is the symbol matrix, and $A \in \mathbb{C}^{M \times d}$ is the so-called joint array steering matrix whose columns contain the array steering vectors $a(\mu_i)$, $i = 1, \ldots, d$, as given in

    $A = \left[ a(\mu_1), a(\mu_2), \ldots, a(\mu_d) \right]$   (3.5)

with $\mu_i = \left[ \mu_i^{(1)}, \mu_i^{(2)}, \ldots, \mu_i^{(R)} \right]^T$. That is, the i-th column of $A$ only contains the R spatial frequencies $\mu_i^{(r)}$, $r = 1, \ldots, R$ belonging to path i. The array steering vectors can explicitly be calculated as the Kronecker products (a matrix outer product, see Appendix A1) of the array steering vectors of the separate modes through

    $a(\mu_i) = a^{(1)}(\mu_i^{(1)}) \otimes a^{(2)}(\mu_i^{(2)}) \otimes \ldots \otimes a^{(R)}(\mu_i^{(R)})$ .   (3.6)
  • 18. The vectors $a^{(r)}(\mu_i^{(r)}) \in \mathbb{C}^{M_r \times 1}$ denote the response of the array in the r-th mode due to the i-th wavefront. As an example, for a Uniform Rectangular Array (URA), which is an OPA (Fig. 3.1) with constant distances within each mode with $M_r$ sensors, we have that

    $a^{(r)}(\mu_i^{(r)}) = \begin{bmatrix} 1 \\ e^{j \mu_i^{(r)}} \\ e^{2 j \mu_i^{(r)}} \\ \vdots \\ e^{(M_r - 1) j \mu_i^{(r)}} \end{bmatrix}$ .   (3.7)

3.2 Tensor Notation

A more natural way to capture the samples (3.1) over N subsequent time instants is by arranging them as an (R+1)-dimensional measurement tensor $\mathcal{X} \in \mathbb{C}^{M_1 \times \cdots \times M_R \times N}$. Similarly to (3.3), the tensor notation reads as

    $\mathcal{X} = \mathcal{A} \times_{R+1} S^T + \mathcal{N}$ ,   (3.8)

where $\mathcal{A} \in \mathbb{C}^{M_1 \times \cdots \times M_R \times d}$ is the array steering tensor of an outer product array (OPA) as in Figure 3.1, given by

    $\mathcal{A} = \sum_{i=1}^{d} a^{(1)}(\mu_i^{(1)}) \circ a^{(2)}(\mu_i^{(2)}) \circ \ldots \circ a^{(R)}(\mu_i^{(R)})$ .   (3.9)

$S \in \mathbb{C}^{d \times N}$ is the same transmitted symbol matrix as in (3.4), and $\mathcal{N} \in \mathbb{C}^{M_1 \times \cdots \times M_R \times N}$ is the noise tensor. Similarly to the procedure in Section 2.3.3, where (2.15) has the same structure as (3.9), the array steering tensor can also be stated as

    $\mathcal{A} = \mathcal{I}_{R+1,d} \times_1 A^{(1)} \times_2 A^{(2)} \cdots \times_R A^{(R)}$ ,   (3.10)

where $A^{(r)} \in \mathbb{C}^{M_r \times d}$ is composed of

    $A^{(r)} = \left[ a^{(r)}(\mu_1^{(r)}), a^{(r)}(\mu_2^{(r)}), \ldots, a^{(r)}(\mu_d^{(r)}) \right]$ .   (3.11)

The following relations between the matrix notation from Section 3.1 and the presented tensor notation hold:

    $A = [\mathcal{A}]_{(R+1)}^T$ ,   (3.12)
    $N = [\mathcal{N}]_{(R+1)}^T$ ,   (3.13)
    $X = [\mathcal{X}]_{(R+1)}^T$ ,   (3.14)

i.e., the measurement matrix $X$ is equal to the transpose of the unfolding of the measurement tensor $\mathcal{X}$ along the last mode. The above steps are also referred to as stacking operations.
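As a sanity check of the two notations, the small NumPy sketch below builds a noiseless 2-D example of the data model: per-mode steering matrices as in (3.11) with URA responses (3.7), the joint steering matrix via Kronecker products (3.6), and the measurement tensor following (3.8)–(3.10). It then confirms the stacking relation (3.14). The unfolding used here happens to let the last spatial index run fastest, which matches the stacking order of (3.2); all sizes and the random seed are arbitrary illustration choices, and the thesis simulations were done in MATLAB.

```python
import numpy as np
rng = np.random.default_rng(1)

def unfold(T, r):
    return np.moveaxis(T, r, 0).reshape(T.shape[r], -1)

M1, M2, d, N = 4, 4, 3, 10
mu = rng.uniform(-np.pi, np.pi, size=(2, d))            # spatial frequencies mu_i^(r)
A1 = np.exp(1j * np.outer(np.arange(M1), mu[0]))        # A^(1), URA responses (3.7)
A2 = np.exp(1j * np.outer(np.arange(M2), mu[1]))        # A^(2)
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)

# measurement tensor (3.8)/(3.10), noiseless: X[m1, m2, n] = sum_i A1[m1,i] A2[m2,i] S[i,n]
X_tensor = np.einsum('ai,bi,in->abn', A1, A2, S)

# matrix model (3.3) with the joint steering matrix built via Kronecker products (3.6)
A = np.column_stack([np.kron(A1[:, i], A2[:, i]) for i in range(d)])
X_matrix = A @ S

print(np.allclose(unfold(X_tensor, 2).T, X_matrix))     # stacking relation (3.14): True
```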
  • 19. 4. R-D Parameter Estimation

In this section, multi-dimensional parameter estimation schemes based on subspace decomposition are presented, where the signal and noise subspaces of the measurement tensor $\mathcal{X}$ as in (3.8) are separated. The number of principal path components d can be estimated according to Model Order Selection (MOS) schemes such as [7]. The three presented R-dimensional parameter estimation techniques are R-D Standard ESPRIT (R-D SE), R-D Standard Tensor-ESPRIT (R-D STE) – both of which can only be applied if the shift invariance property [1] holds – and finally Closed-Form PARAFAC based Parameter Estimation (CFP-PE). Figure 4.1 gives an overview of all three discussed schemes and shall help the reader follow the steps presented in the following subsections.

Fig. 4.1. R-D Standard ESPRIT (R-D SE), R-D Standard Tensor-ESPRIT (R-D STE) and Closed-Form PARAFAC based Parameter Estimation (CFP-PE).

4.1 R-D Standard ESPRIT (R-D SE)

Via the stacking operation (3.14), the measurement tensor $\mathcal{X}$ is reshaped to a matrix $X \in \mathbb{C}^{M \times N}$, where $M = \prod_{r=1}^{R} M_r$. The signal subspace is computed via a low-rank Singular Value Decomposition
  • 20. (SVD) according to (2.8) as

    $X \approx U_s \Sigma_s V_s^H$ ,   (4.1)

where $\Sigma_s \in \mathbb{R}^{d \times d}$. Note that the prime symbol is dropped for notational convenience. By exploiting the shift invariance of the antenna array, a closed-form expression of low computational complexity for the parameter estimation can be deduced [2].

4.2 R-D Standard Tensor-ESPRIT (R-D STE)

This method employs the actual measurement tensor $\mathcal{X}$ and separates the signal and noise subspaces via the HOSVD low-rank approximation according to (2.14) as

    $\mathcal{X} \approx \mathcal{S}^{[s]} \times_1 U_1^{[s]} \cdots \times_R U_R^{[s]} \times_{R+1} U_{R+1}^{[s]}$ ,   (4.2)

where $\mathcal{S}^{[s]} \in \mathbb{C}^{r_1 \times \cdots \times r_{R+1}}$ is the core tensor, $U_r^{[s]} \in \mathbb{C}^{M_r \times r_r}$ is the subspace matrix of the r-th dimension, and $r_r = \min(M_r, d)$ is the r-rank of $\mathcal{X}$. The signal subspace tensor $\mathcal{U}^{[s]} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R \times d}$ is therefore

    $\mathcal{U}^{[s]} = \mathcal{S}^{[s]} \times_1 U_1^{[s]} \cdots \times_R U_R^{[s]}$ .   (4.3)

Again exploiting the shift-invariance structure, we can build R shift invariance matrices according to [2].

4.3 Closed-Form PARAFAC based Parameter Estimation (CFP-PE)

The Closed-Form PARAFAC based Parameter Estimation (CFP-PE) scheme has been proposed in [7]. The measurement tensor $\mathcal{X}$ is decomposed via PARAFAC according to (2.17):

    $\mathcal{X} = \mathcal{I}_{R+1,d} \times_1 F^{(1)} \times_2 F^{(2)} \cdots \times_R F^{(R)} \times_{R+1} F^{(R+1)}$ ,   (4.4)

where $\mathcal{I}_{R+1,d}$ is the (R+1)-dimensional identity tensor in which each dimension has size d. The factor matrices $F^{(r)} \in \mathbb{C}^{M_r \times d}$ are found via the closed-form PARAFAC solution presented in [6]. Comparing with the tensor data model (3.8) and (3.10), one can see that the factor matrices $F^{(r)}$ provide estimates of the system's steering matrices $A^{(r)}$ and of the symbol matrix $S$:

    $\mathcal{X} \approx \mathcal{I}_{R+1,d} \times_1 A^{(1)} \cdots \times_R A^{(R)} \times_{R+1} S^T$   (4.5)

Thus, through the PARAFAC decomposition, we are able to find estimates with the correct structure of the steering matrices $A^{(r)}$, regardless of whether the sensor grid fulfils the shift invariance or not. This guarantees the flexibility of the scheme regarding the chosen sensor array structure and leads to an increased robustness.

Furthermore, the CFP decouples the multi-dimensional data into vectors corresponding to a certain dimension and source. Therefore, after the CFP, a multi-dimensional problem is transformed into several one-dimensional problems. These one-dimensional problems can be solved via Peak Search (PS) or via Shift Invariance (SI), if applicable for the given sensor grid. Moreover, the CFP-PE allows the introduction of a step called merging dimensions, which is applied to increase the model order. A subsequent Least Squares Khatri-Rao Factorization (LSKRF) is used to refactorize the merged factor matrices.
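For readers unfamiliar with the shift-invariance step that SE and STE rely on, the following toy NumPy sketch shows the core idea in one dimension: the first and last M−1 rows of the signal subspace are related by a matrix Ψ whose eigenvalues carry the spatial frequencies. This is only a 1-D illustration of the principle under assumed toy dimensions; the R-D schemes of [2] solve one such system per mode, and the tensor variant works with the subspace tensor (4.3) instead.

```python
import numpy as np
rng = np.random.default_rng(2)

M, d, N = 8, 2, 50                                   # toy 1-D array, sources, snapshots
mu = np.array([0.5, -1.2])                           # true spatial frequencies
A = np.exp(1j * np.outer(np.arange(M), mu))          # steering matrix, cf. (3.7)
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

Us = np.linalg.svd(X, full_matrices=False)[0][:, :d]          # signal subspace, cf. (4.1)
Psi = np.linalg.lstsq(Us[:-1, :], Us[1:, :], rcond=None)[0]   # shift invariance equation
mu_hat = np.angle(np.linalg.eigvals(Psi))                     # frequencies from the eigenvalues
print(np.sort(mu_hat), np.sort(mu))                           # estimates vs. true values
```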
  • 21. 5. R-D Prewhitening

In this section, state-of-the-art tensor-based prewhitening schemes are presented, namely the Sequential Generalized SVD (S-GSVD) and its iterative counterpart, the I-S-GSVD. The former can be applied if a noise-only measurement for the estimation of the noise statistics is available, while the latter scheme can deliver improved estimates even without any information about the noise. From now on, we thus assume that the additive noise component from (3.8) is colored,

    $\mathcal{X} = \mathcal{A} \times_{R+1} S^T + \mathcal{N}^{(c)}$ ,   (5.1)

and that the colored noise tensor $\mathcal{N}^{(c)} \in \mathbb{C}^{M_1 \times \cdots \times M_R \times N}$ has a Kronecker structure, as can be found in certain EEG [8] and MIMO applications [9]. The colored noise tensor can thus be stated as

    $[\mathcal{N}^{(c)}]_{(R+1)} = [\mathcal{N}]_{(R+1)} \cdot (L_1 \otimes L_2 \otimes \ldots \otimes L_R)^T$ ,   (5.2)

where $\otimes$ is the Kronecker product (see Appendix A1), $\mathcal{N}$ is a white noise tensor collecting i.i.d. ZMCSCG noise samples with variance $\sigma_n^2$, and $L_r \in \mathbb{C}^{M_r \times M_r}$, $r = 1, \ldots, R$ are the so-called noise correlation factors of the r-th dimension of the colored noise tensor. As proven in [10], (5.2) can be rewritten as

    $\mathcal{N}^{(c)} = \mathcal{N} \times_1 L_1 \times_2 L_2 \cdots \times_R L_R$ ,   (5.3)

with $\mathcal{N} \in \mathbb{C}^{M_1 \times \cdots \times M_R \times N}$ denoting a white (i.e., uncorrelated) ZMCSCG noise tensor. Please note that while the noise tensor $\mathcal{N}$ is (R+1)-dimensional, there are correlation matrices only for the first R dimensions, as we assume that the time samples are uncorrelated. Alternatively, one can say that $L_{R+1}$ is given to be an identity matrix, which has no effect on the noise tensor. The noise covariance matrix in the r-th mode, $R_r$, is defined as

    $\mathrm{E}\left\{ [\mathcal{N}^{(c)}]_{(r)} \cdot [\mathcal{N}^{(c)}]_{(r)}^H \right\} = \alpha \cdot R_r = \alpha \cdot L_r \cdot L_r^H$ ,   (5.4)

where $\alpha$ is a normalization constant such that $\mathrm{tr}(L_r \cdot L_r^H) = M_r$. The equivalence between (5.2), (5.3) and (5.4) is shown in [10].

5.1 Sequential GSVD (S-GSVD)

The Sequential GSVD prewhitening scheme was proposed in [10]. As presented in the following, it consists of two steps: first, the estimation of the correlation factors $L_r$ from the noise-only measurement tensor $\mathcal{N}^{(c)}$; then, the actual prewhitening scheme can be applied.
  • 22. 5.1.1 Prewhitening Correlation Factor Estimation (PCFE)

In order to apply the S-GSVD prewhitening scheme, the correlation factors $L_r$ must first be estimated from the noise-only measurement tensor $\mathcal{N}^{(c)}$. Dropping the expectation operator from (5.4), one can estimate the noise covariance matrix $R_r$ for each dimension $r = 1, \ldots, R$ by

    $\hat{R}_r = \alpha' \cdot [\mathcal{N}^{(c)}]_{(r)} \cdot [\mathcal{N}^{(c)}]_{(r)}^H = \hat{L}_r \cdot \hat{L}_r^H$ ,   (5.5)

where again $\alpha'$ is chosen such that $\mathrm{tr}(\hat{R}_r) = M_r$. These estimates then need to be factorized to obtain the correlation factor estimates $\hat{L}_r$, e.g., directly via a Cholesky decomposition or via the eigenvalue decomposition (EVD)

    $\hat{R}_r = Q_r \cdot \Lambda \cdot Q_r^H$ ,   (5.6)

from which it follows that

    $\hat{L}_r = Q_r \cdot \Lambda^{\frac{1}{2}}$ .   (5.7)

5.1.2 Tensor Prewhitening Scheme: S-GSVD

Once the estimates $\hat{L}_1, \ldots, \hat{L}_R$, $\hat{L}_r \in \mathbb{C}^{M_r \times M_r}$, of the correlation factor matrices are computed through (5.5), the S-GSVD prewhitening scheme can be executed as follows (see also Figure 5.1):

1) Prewhiten the measurement tensor $\mathcal{X} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R \times N}$:

    $\tilde{\mathcal{X}} = \mathcal{X} \times_1 \hat{L}_1^{-1} \times_2 \hat{L}_2^{-1} \cdots \times_R \hat{L}_R^{-1}$   (5.8)

Note that due to the uncorrelatedness between the time instants, we have only R correlation factors, while the measurement tensor has R+1 dimensions. By substituting our colored measurement tensor (5.1) in (5.8),

    $\tilde{\mathcal{X}} = \left( \mathcal{A} \times_{R+1} S^T + \mathcal{N}^{(c)} \right) \times_1 \hat{L}_1^{-1} \times_2 \hat{L}_2^{-1} \cdots \times_R \hat{L}_R^{-1}$   (5.9)
    $\quad\;\, = \mathcal{A} \times_1 \hat{L}_1^{-1} \times_2 \hat{L}_2^{-1} \cdots \times_R \hat{L}_R^{-1} \times_{R+1} S^T + \mathcal{N}$ ,   (5.10)

while taking into account the Kronecker model of the colored noise tensor (5.3), the multi-dimensional noise component becomes white. However, the signal component of $\mathcal{X}$ has been distorted through the prewhitening. This must be accounted for in a later dewhitening step.

2) Compute the HOSVD low-rank approximation (2.14) of $\tilde{\mathcal{X}}$,

    $\tilde{\mathcal{X}} \approx \mathcal{S}^{[s]} \times_1 U_1^{[s]} \times_2 U_2^{[s]} \cdots \times_R U_R^{[s]} \times_{R+1} U_{R+1}^{[s]}$ ,   (5.11)

such that the corresponding subspace tensor $\tilde{\mathcal{U}}^{[s]}$ is

    $\tilde{\mathcal{U}}^{[s]} = \mathcal{S}^{[s]} \times_1 U_1^{[s]} \times_2 U_2^{[s]} \cdots \times_R U_R^{[s]}$ ,   (5.12)

where $\mathcal{S}^{[s]} \in \mathbb{C}^{p_1 \times p_2 \times \cdots \times p_R \times d}$ and $U_r^{[s]} \in \mathbb{C}^{M_r \times p_r}$ with $p_r = \min(M_r, d)$ for $r = 1, \ldots, R$. We assume again that $d \le N$.

3) Dewhiten the estimated subspace in order to reconstruct the signal subspace:

    $\mathcal{U}^{[s]} = \tilde{\mathcal{U}}^{[s]} \times_1 \hat{L}_1 \times_2 \hat{L}_2 \cdots \times_R \hat{L}_R$   (5.13)
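The PCFE of (5.5)–(5.7) and prewhitening step 1) above can be sketched in a few NumPy lines. The snippet is a simplified illustration with assumed toy dimensions and an assumed exponential correlation model for generating the noise-only data; it is not the S-GSVD implementation of [10], and the HOSVD plus dewhitening steps 2) and 3) would follow exactly as written above.

```python
import numpy as np
rng = np.random.default_rng(3)

def unfold(T, r):
    return np.moveaxis(T, r, 0).reshape(T.shape[r], -1)

def mode_product(T, U, r):
    shape = list(T.shape); shape[r] = U.shape[0]
    rest = [s for i, s in enumerate(T.shape) if i != r]
    return np.moveaxis((U @ unfold(T, r)).reshape([shape[r]] + rest), 0, r)

def pcfe(N_c, R):
    """PCFE (5.5)-(5.7): per-mode covariance estimate, normalized, then factorized."""
    L = []
    for r in range(R):
        Nr = unfold(N_c, r)
        Rr = Nr @ Nr.conj().T
        Rr *= N_c.shape[r] / np.trace(Rr).real      # alpha' such that tr(R_r) = M_r
        L.append(np.linalg.cholesky(Rr))            # R_r = L_r L_r^H
    return L

def prewhiten(X, L):
    """Step 1), eq. (5.8): X_tilde = X x_1 L_1^{-1} ... x_R L_R^{-1}."""
    for r, Lr in enumerate(L):
        X = mode_product(X, np.linalg.inv(Lr), r)
    return X

# noise-only 4 x 4 x 50 tensor with Kronecker-colored noise in the two spatial modes
M, N = [4, 4], 50
W = (rng.standard_normal(M + [N]) + 1j * rng.standard_normal(M + [N])) / np.sqrt(2)
L_true = [np.linalg.cholesky(0.9 ** np.abs(np.subtract.outer(np.arange(m), np.arange(m))))
          for m in M]
N_c = mode_product(mode_product(W, L_true[0], 0), L_true[1], 1)     # cf. (5.3)

L_hat = pcfe(N_c, R=2)
W_hat = prewhiten(N_c, L_hat)     # after prewhitening, the noise is approximately white
```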
  • 23. Fig. 5.1. Basic steps of the S-GSVD prewhitening scheme with Prewhitening Correlation Factor Estimation (PCFE).

With the new, correctly dewhitened subspace tensor $\mathcal{U}^{[s]}$, the parameters can be estimated according to the Standard Tensor-ESPRIT or the CFP based parameter estimation (CFP-PE) scheme (see Sections 4.2 and 4.3).

Originally, the S-GSVD was derived by applying multiple GSVDs [13] to the measurement tensor – hence the name Sequential GSVD. In this way, the matrix inversions in the prewhitening step (5.8) can be avoided. However, the procedure presented above is more accurate than the original S-GSVD and therefore preferable.

5.2 Iterative Sequential GSVD (I-S-GSVD)

If the second-order statistics of the noise cannot be estimated, e.g., if only a small number of noise snapshots is available, or if the noise cannot be measured without the presence of the signal component, then the Iterative Sequential GSVD (I-S-GSVD) can be used, which was proposed in conjunction with the STE in [12]. The principal idea is to apply the prewhitening correlation factor estimation (PCFE) from Section 5.1.1 iteratively in order to compute estimates $\hat{L}_r$ of the correlation factors $L_r$. The concept of the I-S-GSVD prewhitening scheme is depicted in Figure 5.2. In contrast to [12], the I-S-GSVD concept is expanded here by the option to choose CFP-PE in the parameter estimation step. This conjunction of I-S-GSVD and CFP-PE has not yet been investigated in the literature and will be scrutinized in the simulations of Section 6.

The I-S-GSVD algorithm works as follows:

1) Initialize $\hat{L}_r$ as $M_r \times M_r$ identity matrices for $r = 1, \ldots, R$.

2) Perform the S-GSVD from Section 5.1.2.

3) Estimate the parameters $\hat{\mu}_i^{(r)}$ via STE or CFP-PE (see Sections 4.2 and 4.3).
  • 24. Fig. 5.2. Basic steps of the I-S-GSVD iterative prewhitening scheme.

4) From the obtained $\hat{\mu}_i^{(r)}$, estimate the array steering tensor $\hat{\mathcal{A}}$ according to the model in (3.9). Using $\mathcal{X}$ and $\hat{\mathcal{A}}$, calculate the signal matrix

    $\hat{S} = \left( [\mathcal{X}]_{(R+1)} \cdot \left( [\hat{\mathcal{A}}]_{(R+1)} \right)^+ \right)^T$ ,   (5.14)

where $(\cdot)^+$ denotes the Moore-Penrose pseudo-inverse.

5) Given $\hat{\mathcal{A}}$ and $\hat{S}$, compute an estimate of the noise tensor:

    $\hat{\mathcal{N}}^{(c)} = \mathcal{X} - \hat{\mathcal{A}} \times_{R+1} \hat{S}^T$   (5.15)

6) From $\hat{\mathcal{N}}^{(c)}$, update the estimates $\hat{L}_r$ using the PCFE (see Section 5.1.1).

7) Go back to step 2.

According to [12], the root mean square change (RMSC) of the parameter estimates $\hat{\mu}_i^{(r)}$ between two iterations can be applied as a stopping criterion. Via simulations in conjunction with the STE, it could be shown that two to three iterations are always sufficient to achieve convergence. This fact could also be verified for the I-S-GSVD in conjunction with the CFP-PE, as will be shown in the following section.
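The loop 1)–7) can be condensed into a short skeleton. The sketch below uses a deliberately simplified setting with a single spatial dimension (R = 1) and a plain shift-invariance estimator in step 3, so that every step fits in a few lines; the eigenvalue-based factorization (5.6)–(5.7) is used with a small eigenvalue floor because the residual covariance of this toy example is rank deficient. It is meant to show the structure of the iteration only, not to reproduce the R-D I-S-GSVD of [12] combined with STE or CFP-PE.

```python
import numpy as np
rng = np.random.default_rng(4)

# toy R = 1 scenario: M-element uniform array, d sources, N snapshots, colored spatial noise
M, d, N = 8, 2, 50
mu = np.array([0.7, -0.4])
A = np.exp(1j * np.outer(np.arange(M), mu))
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)
R1 = 0.9 ** np.abs(np.subtract.outer(np.arange(M), np.arange(M)))   # spatial correlation
X = A @ S + 0.3 * (np.linalg.cholesky(R1) @
                   (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2))

def esprit(Us):
    Psi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
    return np.angle(np.linalg.eigvals(Psi))

L_hat = np.eye(M)                                  # 1) initialize with an identity matrix
for _ in range(3):
    Xw = np.linalg.inv(L_hat) @ X                  # 2) prewhiten, cf. (5.8)
    Us = L_hat @ np.linalg.svd(Xw, full_matrices=False)[0][:, :d]   #    subspace + dewhiten (5.13)
    mu_hat = esprit(Us)                            # 3) estimate the parameters
    A_hat = np.exp(1j * np.outer(np.arange(M), mu_hat))             # 4) steering matrix
    S_hat = np.linalg.pinv(A_hat) @ X              #    and signal matrix, cf. (5.14)
    N_hat = X - A_hat @ S_hat                      # 5) residual noise estimate (5.15)
    R_hat = N_hat @ N_hat.conj().T
    R_hat *= M / np.trace(R_hat).real              # 6) PCFE normalization, tr(R) = M
    lam, Q = np.linalg.eigh(R_hat)
    L_hat = Q * np.sqrt(np.maximum(lam, 1e-3))     #    factor as in (5.7), with a small floor
print(np.sort(mu_hat), np.sort(mu))                # 7) repeat; estimates after 3 iterations
```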
  • 25. 6. Simulation Results

In this section, simulations carried out in MATLAB shall demonstrate the performance of the discussed multi-dimensional parameter estimation techniques and prewhitening schemes using the R-D harmonic retrieval model of (3.8), where the spatial frequencies $\mu_i^{(r)}$ are drawn from a uniform distribution in $[-\pi, \pi]$. The source symbols are i.i.d. ZMCSCG distributed with power equal to $\sigma_s^2$ for all sources. The SNR at the receiver is defined as

    $\mathrm{SNR} = 10 \cdot \log_{10} \frac{\sigma_s^2}{\sigma_n^2}$ ,   (6.1)

where $\sigma_n^2$ is the variance of the elements of the white noise tensor $\mathcal{N}$ in (3.8) for Section 6.1 and in (5.3) for Section 6.2. For all executed simulations, a scenario with the specifications shown in Table 6.1 was employed. Five different parameters were estimated, turning the scenario into a 5-D parameter estimation problem.

                                 Variable   Estimated parameter
    Transmitter antenna array    M1 = 4     µ_i^(1): DOD azimuth
    of size 4 × 4                M2 = 4     µ_i^(2): DOD elevation
    Receiver antenna array       M3 = 4     µ_i^(3): DOA azimuth
    of size 4 × 4                M4 = 4     µ_i^(4): DOA elevation
    Number of frequency bins     M5 = 4     µ_i^(5): path delay
    Number of snapshots          N  = 4
    Number of paths              d  = 3

    Table 6.1. Specification of the simulated 5-D scenario.

If a simulation is carried out with L realizations for each SNR value, the overall RMSE reads as

    $\mathrm{RMSE} = \sqrt{ \mathrm{E}\left\{ \sum_{r=1}^{R} \sum_{i=1}^{d} \left( \mu_i^{(r)} - \hat{\mu}_i^{(r)} \right)^2 \right\} }$ .   (6.2)

6.1 White Noise Case

First, different tensor-based parameter estimation schemes are briefly compared for a white-noise scenario in Figure 6.1. The RMSE is plotted versus the SNR according to (6.2). One can see that
  • 26. the tensor-based schemes Standard Tensor-ESPRIT (STE) (Section 4.2) and Closed-Form PARAFAC based Parameter Estimation (CFP-PE) (Section 4.3) have an improved performance over the ordinary Standard ESPRIT (SE). However, comparing all schemes with the Cramer-Rao bound [14], there is still room for improvement.

Fig. 6.1. RMSE vs. SNR for the white noise case for L = 50 runs (curves: SE, 5-D STE, CFP-PE, deterministic CRB).

6.2 Colored Noise Case

In this section, colored noise is generated according to (5.3). The main goal is to investigate the performance of the not yet investigated I-S-GSVD prewhitening method from Section 5.2 in conjunction with CFP-PE, from now on denoted as I-S-CFP-PE. The new scheme is assessed against the plain non-iterative S-GSVD prewhitening scheme joined with CFP-PE, abbreviated as S-CFP-PE, as well as against plain CFP-PE without prewhitening.

In the simulations, it is considered that the elements of the noise covariance matrix in the r-th mode, $R_r = L_r \cdot L_r^H$, vary as a function of the correlation coefficient $\rho_r$, similarly to [10]. As an example, the structure of $R_r$ as a function of $\rho_r$ for $M_r = 3$ is given as

    $R_r = \begin{bmatrix} 1 & \rho_r^* & (\rho_r^*)^2 \\ \rho_r & 1 & \rho_r^* \\ \rho_r^2 & \rho_r & 1 \end{bmatrix}$ ,   (6.3)

where $\rho_r$ is the correlation coefficient of the r-th mode. However, in the following simulations the correlation coefficients were kept constant over all correlated dimensions, i.e., $\rho = \rho_r \; \forall r = 1, \ldots, R$. Note that other types of correlation models can also be used. To be consistent with (5.4), $L_r$ is normalized such that $\mathrm{tr}(L_r \cdot L_r^H) = M_r$. Again, the RMSE is computed according to (6.2).

First of all, the convergence of the I-S-CFP-PE is scrutinized in Figure 6.2. One can see that convergence is reached after only three iterations, which is remarkable. Although there is a small remaining gap to the S-CFP-PE scheme, the RMSE compared to the non-prewhitening scheme
  • 27. CFP-PE can be improved by a factor of ten for the given scenario with correlation coefficient ρ = 0.9. It is expected that the remaining error vanishes for an increasing number of snapshots N; however, this could not be simulated due to the high computational complexity.

Next, the performance of these schemes is tested over a wide SNR range in a colored noise scenario with correlation coefficient ρ = 0.9 in Figure 6.3. In the low-SNR region, the I-S-CFP-PE delivers only marginally better estimates than the non-prewhitening scheme. Again, it is expected that this gap to the S-CFP-PE scheme would decrease significantly for an increased number of snapshots N. In the high-SNR region, the I-S-CFP-PE performs very close to the non-iterative S-CFP-PE and is thus able to successfully estimate the noise correlation factors.

Figure 6.4 looks into the performance over a varying correlation coefficient. For low ρ, that is, low correlation over all dimensions, all three schemes perform equally well. For high correlation, the estimates can be improved drastically by the prewhitening schemes. At the chosen SNR of 20 dB, the I-S-GSVD is always very close to the performance of the non-iterative scheme.

Finally, in Figure 6.5, the performance of the schemes is assessed for a positioning error scenario. While all previously mentioned simulations were executed with shift-invariant outer product arrays, the sensor array is made non-shift-invariant in this simulation. To this end, the antennas of the first two dimensions, e.g., the 2-dimensional receiver antenna array, are randomly misplaced with a positioning error variance σp. For this scenario, the CFP combined with Peak Search (PS) can be successfully applied, while the CFP utilizing Shift Invariance (SI) naturally performs as poorly as the Standard Tensor-ESPRIT with S-GSVD prewhitening.

Fig. 6.2. RMSE vs. Iterations with SNR = 15 dB, correlation coefficient ρ = 0.9 and L = 20 runs (curves: CFP-PE without prewhitening, S-CFP-PE, I-S-CFP-PE).
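For reference, the correlation model (6.3) and the corresponding correlation factor are straightforward to generate. The following NumPy sketch (with an arbitrarily chosen mode size and a real-valued ρ) also verifies that coloring white ZMCSCG samples with L_r reproduces R_r on average, cf. (5.2)–(5.4); it is an illustration of the simulation setup, not the thesis's MATLAB code.

```python
import numpy as np
rng = np.random.default_rng(5)

def corr_matrix(rho, M):
    """Mode correlation matrix as in (6.3): unit diagonal, rho^(i-j) below, conj(rho)^(j-i) above."""
    k = np.subtract.outer(np.arange(M), np.arange(M))          # i - j
    return np.where(k >= 0, rho ** k, np.conj(rho) ** (-k))

Mr, rho = 4, 0.9
Rr = corr_matrix(rho, Mr)            # unit diagonal, so tr(Rr) = Mr holds automatically
Lr = np.linalg.cholesky(Rr)          # correlation factor with Rr = Lr Lr^H

W = (rng.standard_normal((Mr, 100000)) + 1j * rng.standard_normal((Mr, 100000))) / np.sqrt(2)
Nc = Lr @ W                          # colored ZMCSCG samples
print(np.round(Nc @ Nc.conj().T / W.shape[1], 2))   # sample covariance, approximately Rr
```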
  • 28. Fig. 6.3. RMSE vs. SNR with correlation coefficient ρ = 0.9, K = 4 iterations and L = 20 runs (curves: CFP-PE without prewhitening, S-CFP-PE, I-S-CFP-PE).

Fig. 6.4. RMSE vs. Correlation coefficient with SNR = 20 dB, K = 4 iterations and L = 20 runs (curves: CFP-PE without prewhitening, S-CFP-PE, I-S-CFP-PE).
  • 29. Fig. 6.5. RMSE vs. Array Spacing Variance σp with SNR = 40 dB, correlation coefficient ρ = 0.9, K = 4 iterations and L = 15 runs (curves: CFP-PE without prewhitening, S-CFP-PE (SI), S-CFP-PE (PS), I-S-CFP-PE (PS), STE with S-GSVD).
  • 30. 7. Conclusions

The tensor-based parameter estimation techniques presented in this thesis achieve an improved accuracy compared to matrix-based schemes. The advantage of the ESPRIT-based schemes is their low computational complexity owing to the closed-form shift-invariance equations, while the closed-form PARAFAC based parameter estimation can be praised for its applicability to mixed array geometries and its robustness to arrays with positioning errors.

For scenarios with Kronecker colored noise, the results show that the proposed tensor-based prewhitening remarkably improves the estimation accuracy of the plain CFP parameter estimator, while retaining the advantages listed above. Simulations assessed the performance of the proposed iterative tensor-based S-GSVD prewhitening in conjunction with a CFP based parameter estimator. Given a large number of snapshots, this iterative algorithm can achieve both a very good estimation of the signal parameters and of the noise variance. The iteration converges very fast and the remaining error is small. The performance is very close to the estimation obtained using the non-iterative S-GSVD prewhitening with knowledge of the noise covariance information.

A pointer for future research would be to carry out simulations with more advanced and realistic channel models, e.g., in geometry-based scenarios.
  • 31. Appendix

A1 The Kronecker product

The Kronecker product is the outer product of two matrices. Given $A \in \mathbb{C}^{M \times N}$ and $B \in \mathbb{C}^{P \times Q}$, the Kronecker product is the block matrix

    $A \otimes B = \begin{bmatrix} a_{11} B & \cdots & a_{1N} B \\ \vdots & \ddots & \vdots \\ a_{M1} B & \cdots & a_{MN} B \end{bmatrix} \in \mathbb{C}^{(M \cdot P) \times (N \cdot Q)}$ .   (A1)
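A two-line check of the block structure in (A1), using NumPy's built-in Kronecker product (the matrices are arbitrary illustration choices):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.eye(2)
print(np.kron(A, B))   # 4 x 4 block matrix: each a_ij is replaced by the block a_ij * B
```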
  • 32. Bibliography

[1] Roy, R.; Kailath, T.: ESPRIT – estimation of signal parameters via rotational invariance techniques. In: IEEE Transactions on Acoustics, Speech, and Signal Processing 37 (July 1989), pp. 984–995.

[2] Haardt, M.; Roemer, F.; Galdo, G. D.: Higher-order SVD based subspace estimation to improve the parameter estimation accuracy in multi-dimensional harmonic retrieval problems. In: IEEE Transactions on Signal Processing 56 (July 2008), pp. 3198–3213.

[3] De Lathauwer, L.; De Moor, B.; Vandewalle, J.: A multilinear singular value decomposition. In: SIAM J. Matrix Anal. Appl. 21(4) (2000).

[4] Cattell, R. B.: Parallel proportional profiles and other principles for determining the choice of factors by rotation. In: Psychometrika 9 (Dec. 1944).

[5] Bro, R.; Sidiropoulos, N.; Giannakis, G. B.: A fast least squares algorithm for separating trilinear mixtures. In: Proc. Int. Workshop on Independent Component Analysis for Blind Signal Separation (ICA 99), Jan. 1999, pp. 289–294.

[6] Roemer, F.; Haardt, M.: A closed-form solution for multilinear PARAFAC decompositions. In: Proc. 5th IEEE Sensor Array and Multich. Sig. Proc. Workshop (SAM 2008), Darmstadt, Germany, July 2008, pp. 487–491.

[7] da Costa, J. P. C. L.; Roemer, F.; Weis, M.; Haardt, M.: Robust R-D parameter estimation via closed-form PARAFAC. In: International ITG Workshop on Smart Antennas (WSA 2010), Dresden, Germany, March 2010, pp. 99–106.

[8] Huizenga, H. M.; de Munck, J. C.; Waldorp, L. J.; Grasman, R. P. P. P.: Spatiotemporal EEG/MEG source analysis based on a parametric noise covariance model. In: IEEE Transactions on Biomedical Engineering 49 (June 2002), pp. 533–539.

[9] Park, B.; Wong, T. F.: Training sequence optimization in MIMO systems with colored noise. In: Military Communications Conference (MILCOM 2003), Gainesville, USA, Oct. 2003.

[10] da Costa, J. P. C. L.; Roemer, F.; Haardt, M.: Sequential GSVD based prewhitening for multi-dimensional HOSVD based subspace estimation. In: Proc. International ITG Workshop on Smart Antennas (WSA 2009), Berlin, Germany, Feb. 2009.

[11] da Costa, J. P. C. L.; Haardt, M. et al.: Robust R-D parameter estimation via closed-form PARAFAC in Kronecker colored environments. In: International Symposium on Wireless Communication Systems (ISWCS 2010), York, United Kingdom, Sep. 2010, pp. 115–119.

[12] da Costa, J. P. C. L.; Roemer, F.; Haardt, M.: Iterative sequential GSVD (I-S-GSVD) based prewhitening for multidimensional HOSVD based subspace estimation without knowledge of the noise covariance information. In: International ITG Workshop on Smart Antennas (WSA 2010), Dresden, Germany, March 2010, pp. 151–155.

  • 33. [13] Vandewalle, J.; De Lathauwer, L.; Comon, P.: The generalized higher order singular value decomposition and the oriented signal-to-signal ratios of pairs of signal tensors and their use in signal processing. In: European Conference on Circuit Theory and Design, Krakow, Poland, Sep. 2003.

[14] Stoica, P.; Nehorai, A.: MUSIC, maximum likelihood, and Cramer-Rao bound. In: IEEE Transactions on Acoustics, Speech, and Signal Processing 37 (May 1989), pp. 720–741.