26 Optical Coding Theory with Prime
codewords, cross-correlation functions are generated and they create mutual interference. The aggregated cross-correlation functions and the autocorrelation function (if it exists) are then sampled and threshold-detected. If the sum of all functions in the sampling time, which is usually set at the expected (time) position of the autocorrelation peak, is at least as high as a predetermined decision-threshold level, a data bit 1 is recovered. Otherwise, a bit 0 is decided.
Assume that every simultaneous user sends data bit 0s and 1s with equal but independent probabilities: \(p_0 = p_1 = 1/2\). Let \(q_i\) denote the probability that the cross-correlation value between an interfering codeword and the address codeword of a receiver in the sampling time is \(i\), defined as
\[
q_i = \sum_{x=0}^{1} \Pr(\text{cross-correlation value} = i \mid \text{receiver receives bit } x)\, p_x
    = \frac{1}{2}\Pr(\text{cross-correlation value} = i \mid \text{receiver receives bit } 1)
\]
This kind of probability is commonly referred to as hit probability. The probability term with bit \(x = 0\) is equal to 0 in OOK because no codeword is transmitted and, thus, no interference is generated for a data bit 0. The hit probability generally depends on the code parameters, such as weight and length, of the unipolar codewords in use and will be given in their respective chapters.
For unipolar codes with the maximum cross-correlation function of \(\lambda_c\), each interfering codeword may contribute up to \(\lambda_c\) pulses (or hits) toward the cross-correlation function. So, the total number of hits seen by a receiver in the sampling time is given by \(\sum_{k=0}^{\lambda_c} k l_k\), where \(l_k\) represents the number of interfering codewords contributing \(k \in [0, \lambda_c]\) hits toward the cross-correlation function. The conditional probability of having the correlation value \(Z = \sum_{k=0}^{\lambda_c} k l_k\) is given by a multinomial distribution as
\[
\Pr(Z \text{ hits} \mid K \text{ users, receiver receives bit } 0)
= \frac{(K-1)!}{l_0!\, l_1! \cdots l_{\lambda_c}!}\, q_0^{l_0} q_1^{l_1} \cdots q_{\lambda_c}^{l_{\lambda_c}}
\]
As there are \(K - 1 = \sum_{k=0}^{\lambda_c} l_k\) interferers (or interfering codewords), the conditional error probability of the receiver, which is actually receiving a bit 0, can be written as
\begin{align*}
\Pr&(\text{error} \mid K \text{ simultaneous users, receiver receives bit } 0) \\
&= \Pr(Z \geq Z_{th} \mid K \text{ simultaneous users, receiver receives bit } 0) \\
&= 1 - \sum_{\sum_{k=0}^{\lambda_c} k l_k = 0}^{Z_{th}-1} \frac{(K-1)!}{l_0!\, l_1! \cdots l_{\lambda_c}!}\, q_0^{l_0} q_1^{l_1} \cdots q_{\lambda_c}^{l_{\lambda_c}} \\
&\approx 1 - \int_{0}^{Z_{th}} \frac{1}{\sqrt{2\pi\sigma_0^2}}\, e^{-\frac{(x-m_0)^2}{2\sigma_0^2}}\, dx
\approx 1 - \int_{-\infty}^{Z_{th}} \frac{1}{\sqrt{2\pi\sigma_0^2}}\, e^{-\frac{(x-m_0)^2}{2\sigma_0^2}}\, dx
\end{align*}
Fundamental Materials and Tools 27
where \(Z_{th}\) is the threshold level used to decide whether the receiver is receiving a data bit 0 or 1. In the derivation, the multinomial distribution is approximated as Gaussian with mean
\[
m_0 = (K-1) \sum_{i=0}^{\lambda_c} i q_i
\]
and variance
\[
\sigma_0^2 = (K-1) \left[ \sum_{i=1}^{\lambda_c} \sum_{j=0}^{i-1} (i-j)^2 q_i q_j \right]
\]
for a sufficiently large \(K\), due to the Central Limit Theorem. It is assumed that the cross-correlation functions of these \(K-1\) interferers are independent and identically distributed random variables. As each cross-correlation function takes on a value ranging from 0 to \(\lambda_c\), the individual mean and variance are then given by \(m = \sum_{i=0}^{\lambda_c} i q_i\) and \(\sigma^2 = \sum_{i=0}^{\lambda_c} (i-m)^2 q_i = \sum_{i=1}^{\lambda_c} \sum_{j=0}^{i-1} (i-j)^2 q_i q_j\), respectively. The Central Limit Theorem states that the probability distribution of the sum of these \(K-1\) random variables approaches a Gaussian distribution when \(K\) is sufficiently large [24, 25].
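The quality of this Gaussian approximation can be checked numerically for small \(\lambda_c\) and \(K\). The sketch below (helper names are ours, not from the text) enumerates the multinomial distribution of hits exactly and compares \(\Pr(Z \geq Z_{th})\) with the Gaussian tail built from \(m_0\) and \(\sigma_0^2\), using the double-sum form of the variance:

```python
import math
from itertools import product

def exact_tail(q, K, z_th):
    """Exact Pr(Z >= z_th | K users, bit 0): enumerate every split of the
    K - 1 interferers into (l_0, ..., l_lam) and sum multinomial weights."""
    lam = len(q) - 1
    n = K - 1
    total = 0.0
    for ls in product(range(n + 1), repeat=lam):  # l_1 .. l_lam
        if sum(ls) > n:
            continue
        l = (n - sum(ls),) + ls                   # l_0 fixed by the constraint
        if sum(k * lk for k, lk in enumerate(l)) >= z_th:
            coef = math.factorial(n)
            for lk in l:
                coef //= math.factorial(lk)       # multinomial coefficient
            total += coef * math.prod(qk ** lk for qk, lk in zip(q, l))
    return total

def gaussian_tail(q, K, z_th):
    """Gaussian approximation: Pr(Z >= z_th) ~ Q((z_th - m_0) / sigma_0)."""
    lam = len(q) - 1
    m = sum(i * qi for i, qi in enumerate(q))
    # variance via the double-sum identity from the text
    var = sum((i - j) ** 2 * q[i] * q[j]
              for i in range(1, lam + 1) for j in range(i))
    m0, s0 = (K - 1) * m, math.sqrt((K - 1) * var)
    return 0.5 * math.erfc((z_th - m0) / (s0 * math.sqrt(2)))
```

For a hypothetical hit distribution such as \(q = (0.7, 0.25, 0.05)\) and moderate \(K\), the exact and approximate tails roughly agree near the mean, as the Central Limit Theorem predicts.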
If a codeword, which represents the transmission of a data bit 1, arrives at a receiver with matching address codeword, an autocorrelation function with a high peak results. The total number of pulses seen by the receiver in the sampling time includes the autocorrelation peak and hits (generated by interfering codewords), and can be as large as \(w + \sum_{k=0}^{\lambda_c} k l_k\). The autocorrelation peak is usually equal to the code weight \(w\) of the unipolar codewords in use. So, the conditional probability of having the correlation value \(Z = w + \sum_{k=0}^{\lambda_c} k l_k\) is also given by a multinomial distribution and the conditional error probability of the receiver, which is actually receiving a bit 1, can be written as
\begin{align*}
\Pr&(\text{error} \mid K \text{ simultaneous users, receiver receives bit } 1) \\
&= \Pr(Z < Z_{th} \mid K \text{ simultaneous users, receiver receives bit } 1) \\
&= 1 - \sum_{w + \sum_{k=0}^{\lambda_c} k l_k = Z_{th}}^{w + (K-1)\lambda_c} \frac{(K-1)!}{l_0!\, l_1! \cdots l_{\lambda_c}!}\, q_0^{l_0} q_1^{l_1} \cdots q_{\lambda_c}^{l_{\lambda_c}} \\
&\approx 1 - \int_{Z_{th}}^{w+(K-1)\lambda_c} \frac{1}{\sqrt{2\pi\sigma_1^2}}\, e^{-\frac{(x-m_1)^2}{2\sigma_1^2}}\, dx
\approx 1 - \int_{Z_{th}}^{+\infty} \frac{1}{\sqrt{2\pi\sigma_1^2}}\, e^{-\frac{(x-m_1)^2}{2\sigma_1^2}}\, dx
\end{align*}
with mean
\[
m_1 = w + (K-1) \sum_{i=0}^{\lambda_c} i q_i
\]
and variance
\[
\sigma_1^2 = \sigma_0^2 = (K-1) \left[ \sum_{i=1}^{\lambda_c} \sum_{j=0}^{i-1} (i-j)^2 q_i q_j \right]
\]
for a sufficiently large \(K\), due to the Central Limit Theorem.
FIGURE 1.2 Typical relationship of two symmetric Gaussian conditional probability density functions, \(p(x|0)\) and \(p(x|1)\), where \(Z_{th}^{opt}\) is the optimal decision threshold [21, 26].
To minimize decision error, the optimal decision threshold \(Z_{th}^{opt}\) must be determined. First, consider the conditional probability density functions in the two conditional error probabilities:
\[
p(x|0) = \frac{1}{\sqrt{2\pi\sigma_0^2}}\, e^{-\frac{(x-m_0)^2}{2\sigma_0^2}}
\]
\[
p(x|1) = \frac{1}{\sqrt{2\pi\sigma_1^2}}\, e^{-\frac{(x-m_1)^2}{2\sigma_1^2}}
\]
The general relationship of these two (symmetric) Gaussian-distributed conditional probability density functions is illustrated in Figure 1.2. The optimal decision threshold \(Z_{th}^{opt}\) is usually located at the point of intersection of these functions. As shown in the figure, the shaded areas to the left and right of \(Z_{th}^{opt}\) indicate the regions of errors caused by the conditional error probabilities \(\Pr(\text{error} \mid K \text{ simultaneous users, receiver receives bit } 1)\) and \(\Pr(\text{error} \mid K \text{ simultaneous users, receiver receives bit } 0)\), respectively.
The optimization of \(Z_{th}\) involves the so-called likelihood ratio test [26] with the conditional probability density functions \(p(x|0)\) and \(p(x|1)\) following the rules:

1. If \(p(x|0)p(0) > p(x|1)p(1)\), choose data bit 0.
2. If \(p(x|1)p(1) > p(x|0)p(0)\), choose data bit 1.

If \(p(0) = p(1)\), the optimal threshold occurs at the location of \(x\) where \(p(x|0) = p(x|1)\). The optimal threshold level for minimizing the error probability is found to be
\[
x = Z_{th}^{opt} = \frac{m_0 + m_1}{2}
\]
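This midpoint can be verified in one line: with \(\sigma_1^2 = \sigma_0^2\), as derived above, the normalizing factors of \(p(x|0)\) and \(p(x|1)\) cancel, and equating the exponents gives
\[
(x - m_0)^2 = (x - m_1)^2
\;\Longrightarrow\;
-2 m_0 x + m_0^2 = -2 m_1 x + m_1^2
\;\Longrightarrow\;
x = \frac{m_0 + m_1}{2}
\]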
Substituting \(Z_{th}^{opt}\) into the two conditional error probabilities, they eventually become
\begin{align*}
\Pr&(\text{error} \mid K \text{ simultaneous users, receiver receives bit } 1) \\
&= \Pr(\text{error} \mid K \text{ simultaneous users, receiver receives bit } 0) \\
&\approx 1 - \int_{-\infty}^{Z_{th}^{opt}} \frac{1}{\sqrt{2\pi\sigma_0^2}}\, e^{-\frac{(x-m_0)^2}{2\sigma_0^2}}\, dx
= Q\!\left(\frac{Z_{th}^{opt} - m_0}{\sqrt{\sigma_0^2}}\right)
= Q\!\left(\frac{w}{2\sqrt{\sigma_0^2}}\right)
\end{align*}
where \(Q(x) = (1/\sqrt{2\pi}) \int_x^{\infty} \exp\!\left(-y^2/2\right) dy\) is the complementary error function.
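In numerical work, \(Q(x)\) is conveniently evaluated through the complementary error function available in standard math libraries; a minimal sketch (the function name is ours):

```python
import math

def q_function(x: float) -> float:
    """Gaussian tail Q(x) = (1/sqrt(2*pi)) * integral_x^inf exp(-y^2/2) dy,
    evaluated as erfc(x / sqrt(2)) / 2."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))
```

As expected, \(Q(0) = 1/2\), and \(Q(x)\) decreases monotonically toward 0 as \(x\) grows.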
The Gaussian-approximated error probability of unipolar codes with the maximum cross-correlation function of \(\lambda_c\) in OOK is finally derived as
\begin{align*}
P_{e|G\text{, unipolar}} &= \sum_{x=0}^{1} \Pr(\text{error} \mid K \text{ simultaneous users, receiver receives bit } x)\, p_x \\
&= Q\!\left(\frac{1}{2}\sqrt{\frac{w^2}{(K-1)\sum_{i=1}^{\lambda_c}\sum_{j=0}^{i-1} (i-j)^2 q_i q_j}}\right)
= Q\!\left(\frac{\sqrt{\mathrm{SIR}}}{2}\right) \tag{1.1}
\end{align*}
where the factor 1/2 is due to OOK with equal probability of transmitting data bit 1s and 0s, and the terms inside the square root are collectively identified as the signal-to-interference power ratio (SIR).
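Equation (1.1) is straightforward to evaluate once the hit probabilities are known. A sketch (helper name and the sample hit distribution below are ours, not the book's):

```python
import math

def unipolar_error_prob(w: int, K: int, q) -> float:
    """P_{e|G, unipolar} = Q(0.5 * sqrt(w^2 / ((K - 1) * S))), Equation (1.1),
    with S = sum_{i=1}^{lam} sum_{j=0}^{i-1} (i - j)^2 q_i q_j."""
    lam = len(q) - 1
    S = sum((i - j) ** 2 * q[i] * q[j]
            for i in range(1, lam + 1) for j in range(i))
    arg = 0.5 * math.sqrt(w * w / ((K - 1) * S))
    return 0.5 * math.erfc(arg / math.sqrt(2.0))  # Q(arg)
```

The error probability worsens as \(K\) grows and improves as \(w\) grows, matching the SIR interpretation of the argument.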
1.8.2 Gaussian Approximation for Bipolar Codes

Using coherent bipolar modulation, every user sends out a bipolar codeword corresponding to the address (signature) codeword of its intended receiver for a data bit 1, but transmits the phase-conjugate form of the same codeword for a data bit 0. The advantage of bipolar codes, such as Walsh codes and maximal-length sequences [22, 47, 48], is the capability of having zero in-phase cross-correlation functions for preventing mutual interference. However, these kinds of bipolar codes require perfect code synchronization to operate, meaning that all simultaneous users need to transmit in the same orientation—should it be time or wavelength. As a result, a coherent system is usually a synchronous system if zero mutual interference is desired. On the other hand, if perfect code synchronization is not possible, Gold sequences, a family of asynchronous bipolar codes with multiple periodic cross-correlation functions, can be used at the expense of stronger mutual interference.
The Gaussian-approximated error probability of bipolar codes is generally given by [47]
\[
P_{e|G\text{, bipolar}} = Q\!\left(\sqrt{\mathrm{SIR}}\right)
\]
Comparing to Equation (1.1), the factor 1/2 disappears because data bit 0s are now conveyed by the phase-conjugate form of bipolar codewords. The actual form of the SIR depends on the bipolar codes in use. For example, the asynchronous Gold sequences of length \(N\) are found to have \(\mathrm{SIR} = 2N/(K-1)\) [22, 47].
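As a sketch of the bipolar case for Gold sequences (the function name is ours), using the SIR value quoted above:

```python
import math

def bipolar_error_prob_gold(N: int, K: int) -> float:
    """P_{e|G, bipolar} = Q(sqrt(SIR)) with SIR = 2N/(K - 1),
    the value quoted in the text for asynchronous Gold sequences."""
    sir = 2.0 * N / (K - 1)
    return 0.5 * math.erfc(math.sqrt(sir) / math.sqrt(2.0))  # Q(sqrt(sir))
```

Without the factor 1/2 inside \(Q\), the argument is twice that of a unipolar code at the same SIR, which is why bipolar codes perform better under this comparison.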
FIGURE 1.3 Gaussian error probability of (unipolar) 1-D prime codes and (bipolar) Gold sequences for various code lengths \(N\). [Plot: error probability \(P_{e|G}\) from \(10^{-14}\) to \(10^{0}\) versus the number of simultaneous users \(K\) from 5 to 20; curves for prime codes with \(N = 121\) and 289, and Gold sequences with \(N = 127\) and 255.]
Figure 1.3 plots the Gaussian-approximated error probabilities of the (unipolar) 1-D prime codes in Section 3.1 and (bipolar) Gold sequences against the number of simultaneous users \(K\). In this example, the prime codes have \(\lambda_c = 2\), length \(N = p^2 = \{121, 289\}\), and weight \(w = p = \{11, 17\}\), while Gold sequences have \(N = w = \{127, 255\}\), where \(p\) is a prime number. Their code lengths are selected to be similar for a fair comparison. From Section 3.1, the hit probabilities of the prime codes are given by \(q_0 = (7p^2 - p - 2)/(12p^2)\), \(q_1 = (2p^2 + p + 2)/(6p^2)\), and \(q_2 = (p-2)(p+1)/(12p^2)\). The variance in Equation (1.1) becomes \(\sum_{i=1}^{2}\sum_{j=0}^{i-1} (i-j)^2 q_i q_j = (5p^2 - 2p - 4)/(12p^2)\). As shown in the figure, the error probabilities of both code families improve as \(w\) or \(N\) increases because of higher autocorrelation peaks.
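These closed forms can be cross-checked exactly with rational arithmetic; in the sketch below (helper names are ours), the three hit probabilities sum to 1 and reproduce the quoted variance term:

```python
from fractions import Fraction

def prime_code_hit_probs(p: int):
    """Hit probabilities of 1-D prime codes (lambda_c = 2) quoted in the text."""
    q0 = Fraction(7 * p * p - p - 2, 12 * p * p)
    q1 = Fraction(2 * p * p + p + 2, 6 * p * p)
    q2 = Fraction((p - 2) * (p + 1), 12 * p * p)
    return (q0, q1, q2)

def variance_term(q):
    """S = sum_{i=1}^{2} sum_{j=0}^{i-1} (i - j)^2 q_i q_j from Equation (1.1)."""
    return sum((i - j) ** 2 * q[i] * q[j] for i in range(1, 3) for j in range(i))
```

For \(p = 11\) and \(p = 17\), variance_term reproduces \((5p^2 - 2p - 4)/(12p^2)\) exactly.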