Fundamental Materials and Tools 21
where $c_{i,j}$ and $c'_{i,j}$ represent the code elements in the $i$th rows and $j$th columns of $\mathbf{C}$ and $\mathbf{C}'$, respectively, "$\oplus$" denotes a modulo-$L$ addition, and "$\boxplus$" denotes a modulo-$N$ addition.
For the 2-D in-phase cross-correlation function, $\tau_1$ and $\tau_2$ are set to their own fixed values, such as $\tau_1 = 0$ and $\tau_2 = 0$.
For the 2-D aperiodic cross-correlation function, the modulo-L and modulo-N
additions are replaced by ordinary additions so that the shifts are noncyclic.
Setting $\mathbf{C} = \mathbf{C}'$, $\Theta_{C,C}(\tau_1, \tau_2)$ defines the 2-D discrete autocorrelation function of $\mathbf{C}$. When $\tau_1 = \tau_2 = 0$, this function gives the autocorrelation peak, which is equal to the Hamming (or code) weight. Otherwise, it gives the autocorrelation sidelobe at $\tau_1$- and $\tau_2$-shifts.
The maximum cross-correlation function is the largest cross-correlation value that can be obtained between any two correlating $L \times N$ matrices, say $\mathbf{C}$ and $\mathbf{C}'$, in the code set and is defined as

$$\Theta_{\max} = \max\left\{\Theta_{C,C'}(\tau_1, \tau_2), \text{ for all } \tau_1 \in [0, L-1] \text{ and } \tau_2 \in [0, N-1]\right\}$$
The average cross-correlation function between two $L \times N$ matrices is taken by averaging the sum of their cross-correlation values obtained from all $L$ $\tau_1$-shifts and $N$ $\tau_2$-shifts and is given by

$$\overline{\Theta}_{C,C'} = \frac{1}{LN} \sum_{\tau_1=0}^{L-1} \sum_{\tau_2=0}^{N-1} \Theta_{C,C'}(\tau_1, \tau_2)$$
The 2-D periodic cross-correlation function between any two $L \times N$ matrices (i.e., codewords) counts the number of positions where both $L \times N$ matrices have the same nonzero code elements at any $\tau_1$ and $\tau_2$ cyclic-shifts. For example, if the code set contains binary (0,1) $L \times N$ matrices, the nonzero code elements are simply binary 1s. The aperiodic definition implies that the $\tau_1$- and $\tau_2$-shifts are noncyclic. In these 2-D definitions, $\tau_1$ and $\tau_2$ are referred to as two different kinds of shifts. For example, if $\tau_1$ is for wavelength shift, then $\tau_2$ is for time shift, or vice versa. The in-phase cross-correlation function is just one value because it is just one case, usually $\tau_1 = 0$ and $\tau_2 = 0$, in the periodic cross-correlation function.
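These definitions are straightforward to verify numerically for small matrices. The following Python sketch (the 2×4 toy codewords are invented for illustration, not taken from the text) computes the periodic cross-correlation at given cyclic shifts, checks that the autocorrelation peak equals the code weight, and evaluates the maximum and average cross-correlation values over all shifts:

```python
import numpy as np

def periodic_xcorr_2d(C, Cp, tau1, tau2):
    """2-D periodic cross-correlation at cyclic shifts (tau1, tau2):
    counts positions where both binary matrices have a 1, with row
    indices taken modulo L and column indices modulo N."""
    shifted = np.roll(np.roll(Cp, tau1, axis=0), tau2, axis=1)
    return int(np.sum(C * shifted))

# Toy 2x4 binary (0,1) codewords (hypothetical, for illustration only).
C = np.array([[1, 0, 0, 1],
              [0, 1, 0, 0]])
Cp = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0]])

# Autocorrelation peak at (tau1, tau2) = (0, 0) equals the code weight.
assert periodic_xcorr_2d(C, C, 0, 0) == C.sum()

# Maximum and average cross-correlation over all L x N cyclic shifts.
L, N = C.shape
values = [periodic_xcorr_2d(C, Cp, t1, t2)
          for t1 in range(L) for t2 in range(N)]
print("max:", max(values), "average:", sum(values) / (L * N))
```

For the aperiodic definition, the `np.roll` calls would be replaced by zero-padded (noncyclic) shifts.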
1.6 CARDINALITY UPPER BOUND
The upper bound of the cardinality (or code size) of a family of 1-D or 2-D optical codes depends on the code length $N$, code weight $w$, the number of wavelengths $L$, the maximum autocorrelation sidelobe $\lambda_a$, and the maximum periodic cross-correlation function $\lambda_c$ in use [17, 18, 21, 22, 44].
22 Optical Coding Theory with Prime
Theorem 1.7
The cardinality upper bound of any family of 2-D (binary unipolar) optical codes is formulated as

$$\Phi(L \times N, w, \lambda_a, \lambda_c) \le \frac{\lambda_a L (LN-1)(LN-2) \cdots (LN-\lambda_c)}{w(w-1)(w-2) \cdots (w-\lambda_c)}$$

where $\lambda_c > 0$ and $\lambda_a > 0$. For 1-D (binary unipolar) optical codes, $L = 1$ is set.
Proof The derivation is based on Johnson's cardinality bound of 1-D optical orthogonal codes [17, 18, 44]. By computing the cyclic separations of adjacent pulses in every codeword (of weight $w$) in the code set, $w$ cyclic adjacent-pulse separations per codeword can be obtained. To have $\lambda_c$ as the maximum periodic cross-correlation function, the cross-correlation set for each codeword in the code set is obtained by grouping every $\lambda_c$ of the consecutive cyclic adjacent-pulse separations of the codeword in accordance with the method in [45]. As a result, there exist a total of $w\binom{w-1}{\lambda_c}/(\lambda_a L)$ disjoint elements in the cross-correlation set of each codeword [46], and the cross-correlation sets of all codewords must be distinct. The factor $L$ accounts for the fact that the cross-correlation process is performed cyclically in the horizontal (i.e., time) direction only. Because there are $\binom{LN-1}{\lambda_c}$ possible ways to select $\lambda_c$ elements in the cross-correlation sets, the cardinality upper bound is given by

$$\Phi(L \times N, w, \lambda_a, \lambda_c) \le \frac{\binom{LN-1}{\lambda_c}}{w\binom{w-1}{\lambda_c}/(\lambda_a L)}$$

where $\binom{x}{y} = x!/[y!(x-y)!]$ for integers $x \ge y \ge 0$. After some manipulations, the final form of the theorem is derived.
For example, the cardinality of the carrier-hopping prime codes in Section 5.1, which have $L = w \le p_1$, $N = p_1 p_2 \cdots p_k$, $\lambda_a = 0$, and $\lambda_c = 1$, is upper-bounded by

$$\Phi(w \times p_1 p_2 \cdots p_k, w, 0, 1) \le \Phi(w \times p_1 p_2 \cdots p_k, w, 1, 1) \le \frac{w(wp_1 p_2 \cdots p_k - 1)}{w(w-1)} = p_1 p_2 \cdots p_k + \frac{p_1 p_2 \cdots p_k - 1}{w-1}$$

according to Theorem 1.7, where $p_1 \le p_2 \le \cdots \le p_k$ are prime numbers. For comparison, the actual cardinality of the carrier-hopping prime codes is given in Section 5.1 as $p_1 p_2 \cdots p_k$, which approaches the upper bound for a large $w$. So, the codes are said to have asymptotic optimal cardinality.
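As a quick numeric sanity check of Theorem 1.7 and the carrier-hopping example, the bound can be evaluated directly. This Python sketch uses a small, made-up parameter set ($w = 4$, $p_1 = 5$, $p_2 = 7$; not from the text) and confirms the closed form above:

```python
from math import prod

def cardinality_upper_bound(L, N, w, lam_a, lam_c):
    """Theorem 1.7 upper bound on code size, for lam_a > 0 and lam_c > 0
    (set L = 1 for 1-D codes):
    Phi <= lam_a * L * (LN-1)(LN-2)...(LN-lam_c) / [w(w-1)...(w-lam_c)]."""
    num = lam_a * L * prod(L * N - i for i in range(1, lam_c + 1))
    den = w * prod(w - i for i in range(1, lam_c + 1))
    return num / den

# Carrier-hopping prime codes (Section 5.1): L = w <= p1, N = p1*p2*...*pk,
# lambda_a = 0 (so bounded via lambda_a = 1), lambda_c = 1.
primes = [5, 7]    # a hypothetical choice of p1, p2
w = 4              # must satisfy w <= p1
N = prod(primes)
bound = cardinality_upper_bound(w, N, w, 1, 1)
actual = N         # actual cardinality is p1*p2*...*pk
print(bound, actual)
```

The printed bound equals $N + (N-1)/(w-1)$, and the actual cardinality $p_1 p_2 \cdots p_k$ stays below it, narrowing as $w$ grows.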
1.7 MARKOV CHAIN
To analyze the performance of optical codes, the cross-correlation function plays an important role because it counts the number of pulse overlaps (or so-called hits) between any two correlating codewords. For code weight $w$, there are up to $w$ pulse positions for the hits to occur, and these positions can be modeled as the states of a Markov chain [24, 25]. The probability of changing the number of hits from one value to another value can be computed from the probability of transferring from one state to another state in such a Markov chain.
Containing a finite number of states, a Markov chain is a discrete random process with a (memoryless) Markov property. The Markov property states that the conditional probability of the next state depends only on the present state, not on the sequence of states that preceded it. Let random variables $\{X_0, X_1, X_2, \ldots, X_i\}$ represent the probability distributions of $i+1$ states, labeled with distinct integers ranging from 0 to $i$. The Markov property implies that

$$\Pr(X_i = x_i \mid X_{i-1} = x_{i-1}, X_{i-2} = x_{i-2}, \ldots, X_0 = x_0) = \Pr(X_i = x_i \mid X_{i-1} = x_{i-1})$$
The changes of state in a Markov chain are called transitions. The probabilities associated with state changes are called transition probabilities. Due to the Markov property, $p_{i,j} = \Pr(X_j = j \mid X_i = i)$ represents the transition probability of transferring from state $i$ to state $j$. A directed graph with paths, labeled by the transition probabilities, going from one state to other states is commonly used to characterize a Markov chain, such as the one in Figure 1.1.
[Figure 1.1 shows a state-transition diagram: states $0, 1, 2, 3, \ldots, w$ with directed edges labeled by the transition probabilities $p_{i,j}$ between every pair of states, including self-loops $p_{i,i}$.]

FIGURE 1.1 State transition diagram of a Markov chain with transition probabilities, $p_{i,j}$, where $i$ and $j \in [0, w]$.
Let $\mathbf{P}$ denote the transition matrix containing a complete set of the transition probabilities of the Markov chain in Figure 1.1 such that

$$\mathbf{P} = \begin{bmatrix} p_{0,0} & p_{0,1} & \cdots & p_{0,w} \\ p_{1,0} & p_{1,1} & \cdots & p_{1,w} \\ \vdots & \vdots & \ddots & \vdots \\ p_{w,0} & p_{w,1} & \cdots & p_{w,w} \end{bmatrix}$$
This square matrix has nonnegative elements. Because the total probability of transferring out from any state must be equal to 1, each row sums to one and $\sum_{j=0}^{w} p_{i,j} = 1$.
To analyze a Markov chain, higher-order transition probabilities are sometimes computed. The $k$th-order transition probabilities can be calculated by multiplying the first-order transition matrix $\mathbf{P}$ by itself $k$ times, denoted by [24]

$$\mathbf{P}^k = \begin{bmatrix} p^{(k)}_{0,0} & p^{(k)}_{0,1} & \cdots & p^{(k)}_{0,w} \\ p^{(k)}_{1,0} & p^{(k)}_{1,1} & \cdots & p^{(k)}_{1,w} \\ \vdots & \vdots & \ddots & \vdots \\ p^{(k)}_{w,0} & p^{(k)}_{w,1} & \cdots & p^{(k)}_{w,w} \end{bmatrix}$$
where $p^{(k)}_{i,j}$ is the probability of starting from state $i$ and arriving at state $j$ after $k$ transitions. The probability that any $k$ consecutive transitions take on particular values is computed by

$$\begin{aligned}
\Pr(X_k = x_k, X_{k-1} = x_{k-1}, X_{k-2} = x_{k-2}, \ldots, X_0 = x_0) &= \Pr(X_k = x_k \mid X_{k-1} = x_{k-1}) \Pr(X_{k-1} = x_{k-1} \mid X_{k-2} = x_{k-2}) \cdots \\
&\quad \times \Pr(X_1 = x_1 \mid X_0 = x_0) \Pr(X_0 = x_0) \\
&= \mathbf{P}\mathbf{P}\cdots\mathbf{P}\,\Pr(X_0 = x_0) \\
&= \mathbf{P}^k \Pr(X_0 = x_0)
\end{aligned}$$
where $\Pr(X_0 = x_0)$ is the initial state probability.
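The matrix-power relation above is easy to demonstrate numerically. In the following Python sketch, the transition matrix entries are hypothetical values for a small chain with three states (i.e., $w = 2$), chosen only so that each row sums to one:

```python
import numpy as np

# A small transition matrix for a chain with states {0, 1, 2} (w = 2);
# the entries are made up for illustration, with each row summing to 1.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

# Each row must sum to one: sum_j p_{i,j} = 1 for every state i.
assert np.allclose(P.sum(axis=1), 1.0)

# kth-order transition probabilities: P^k = P multiplied by itself k times.
k = 3
Pk = np.linalg.matrix_power(P, k)

# p^{(k)}_{i,j} is the probability of starting in state i and reaching
# state j after k transitions; the rows of P^k remain distributions.
assert np.allclose(Pk.sum(axis=1), 1.0)

# State distribution after k transitions from an initial distribution.
pi0 = np.array([1.0, 0.0, 0.0])   # start in state 0 with certainty
pik = pi0 @ Pk
print(pik)
```

Row $i$ of `Pk` gives the distribution over states after $k$ transitions starting from state $i$, matching the definition of $p^{(k)}_{i,j}$.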
1.8 ALGEBRAIC TOOLS FOR PERFORMANCE ANALYSIS
Gaussian approximation is a quick estimation method to analyze code performance by applying the Central Limit Theorem [24–26]. Such an approximation is especially useful when the structural information of optical codes is not available. Otherwise, combinatorial methods usually produce more accurate code performance [21, 22, 27–37].
In general, the choice of signaling waveform of optical codes affects the designs of transmitter and receiver and, most importantly, the detection process, as studied in Chapter 2. There are two basic detection techniques, namely coherent (for optical field) and incoherent (for optical intensity). While coherent detection refers to how a receiver detects optical signals with knowledge of phase information, incoherent detection refers to the case without such knowledge. Operating on optical intensity, incoherent detection generally does not require phase tracking or system-wise synchronization, and the hardware complexity of the receiver is reduced, but usually at the expense of code performance.
Because binary unipolar codes, which contain (0, 1) code elements, are usually transmitted in the form of optical intensity, they require incoherent detection and are also called incoherent codes. As bipolar codes, which contain (−1, +1) code elements, are usually transmitted in the form of optical phase, they require coherent detection and are also called coherent codes. Because this book studies optical coding theory with emphasis on prime codes, which are mainly incoherent codes, the term optical codes generally refers to this kind of code, which does not require phase tracking or system-wise synchronization, unless stated otherwise.
According to the definition of the periodic cross-correlation function, the amount of interference seen at a receiver involves the correlation of its address codeword with two consecutive codewords from one interferer in order to obtain one complete cross-correlation process. This corresponds to the reception of two data bits in series from the same interferer. So, the error probability seen at the receiver can be written as

$$P_e = \Pr(\text{error} \mid K \text{ simultaneous users}) = \sum_{x=0}^{1} \sum_{y=0}^{1} \Pr(\text{error} \mid K \text{ simultaneous users, interferer sends bits } xy) \, p_{xy}$$

where $p_{xy}$ is the probability of sending a data bit $x = \{0,1\}$ followed by a data bit $y = \{0,1\}$. There are, in total, four data-bit patterns and, in turn, four error-probability terms. Two terms in $P_e$, which correspond to $(x,y) = (0,1)$ and $(1,0)$, are contributed by partial cross-correlations and are usually difficult to compute, especially when the cross-correlation function is greater than 1.
For unipolar codes, these two partial-correlation terms are usually bounded by the term of "bits 11" if on-off-keying modulation is assumed [21, 30]. So, the error probability can be simplified as

$$P_e \le \sum_{x=0}^{1} \Pr(\text{error} \mid K \text{ simultaneous users, interferer sends bit } x) \, p_x$$

where $p_x$ is the probability of sending a data bit $x = \{0,1\}$.
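As a minimal numeric illustration of evaluating this simplified expression, the sketch below assumes equiprobable data bits and hypothetical conditional error probabilities (the values are invented, not from the text; with OOK, an interferer sending bit 0 contributes no hits, so its term is taken as zero here):

```python
# Hypothetical conditional error probabilities, for illustration only.
p_err_given_bit = {0: 0.0, 1: 1e-3}   # assumed values, not from the text
p_x = {0: 0.5, 1: 0.5}                # equiprobable data bits

# P_e <= sum_x Pr(error | K users, interferer sends bit x) * p_x
P_e = sum(p_err_given_bit[x] * p_x[x] for x in (0, 1))
print(P_e)   # 0.0005
```

Real analyses replace the assumed conditional probabilities with values derived from the code's hit probabilities, as in the combinatorial methods formulated later.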
For bipolar codes, the two partial-correlation terms are usually not bounded. For example, Gold sequences of various lengths have different partial-correlation values [6, 22, 47, 48]. So, other means, such as code length and weight, are used to first compute the signal-to-interference power ratio (SIR), and then the code performance is formulated in the form of a Gaussian approximation.
In the following sections, some general analytical techniques, based on Gaussian approximation and combinatorial methods, are formulated. Special analytical techniques that only apply to specific optical codes are given in their respective chapters.
1.8.1 Gaussian Approximation for Unipolar Codes
Using incoherent on-off keying (OOK) modulation, every user sends out a unipolar codeword corresponding to the address (signature) codeword of its intended receiver for a data bit 1, but nothing is transmitted for a data bit 0. At a receiver, unipolar codewords from all simultaneous users are correlated with the receiver's address codeword. If a correct codeword arrives, an autocorrelation function with a high peak results. The autocorrelation peak is usually equal to the code weight. For incorrect