Fundamental Materials and Tools 11
zero vector is always in the subspace. So, a subspace exists even if it has only one
vector—the zero vector.
Theorem 1.4
If all vectors in a set $V'$ are formed by the linear combinations of a subset of linearly independent vectors in a vector space $V$ over a field $F$, $V'$ is called a subspace of $V$ and this subset of linearly independent vectors forms a basis of $V'$. Additive identity elements and inverses exist and the commutative, associative, and distributive laws also hold in $V'$.
Proof Let the subset of linearly independent vectors in $V$ be $\{v_1, v_2, \ldots, v_i, \ldots, v_m\}$. By definition, $V$ also contains vectors created by the linear combinations of these vectors. So, all vectors in $V'$ are found in $V$. Also, let $v_a = a_1 v_1 + a_2 v_2 + \cdots + a_i v_i + \cdots + a_m v_m$ and $v_b = b_1 v_1 + b_2 v_2 + \cdots + b_i v_i + \cdots + b_m v_m$ be vectors in $V'$ for some scalars $\{a_1, a_2, \ldots, a_i, \ldots, a_m\}$ and $\{b_1, b_2, \ldots, b_i, \ldots, b_m\}$ in $F$, and $c$ be a scalar in $F$. Their sum and product
$$v_a + v_b = (a_1 + b_1)v_1 + (a_2 + b_2)v_2 + \cdots + (a_i + b_i)v_i + \cdots + (a_m + b_m)v_m$$
$$c v_a = c a_1 v_1 + c a_2 v_2 + \cdots + c a_i v_i + \cdots + c a_m v_m$$
belong to $V'$ because $a_i + b_i$ and $c a_i$ are also in $F$ for all $i \in [1, m]$ due to closure of $F$ under addition and multiplication. So, $V'$ is a subspace of $V$.
By definition, a vector subspace is also a vector space. So, this subset of linearly independent vectors spans and forms a basis of $V'$, and the properties and laws of $V$ are inherited. The existence of additive identity elements and inverses can be shown by applying $c = 0$ and $c = -1$ such that
$$0 v_a = 0 v_1 + 0 v_2 + \cdots + 0 v_i + \cdots + 0 v_m = \mathbf{0}$$
$$-v_a = (-a_1)v_1 + (-a_2)v_2 + \cdots + (-a_i)v_i + \cdots + (-a_m)v_m$$
result, where "0" is the (scalar) zero in $F$ and $-a_i$ is the additive inverse of $a_i$ in $F$ for all $i \in [1, m]$.
Using the unit vector $v_1 = (0,1)$ of $V_2$ over GF(2) as an example, linear combinations of the form $a_1 v_1$ with a scalar $a_1$ in GF(2) result in two vectors $(0,0)$ and $(0,1)$, which constitute a subspace $V'_2$. So, the unit vector $v_1$ forms a basis of $V'_2$.
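This closure argument can be checked mechanically. A minimal Python sketch (not from the text; the helper names `gf2_add` and `gf2_scale` are illustrative) enumerates the subspace spanned by $v_1$ over GF(2) and verifies closure:

```python
# Sketch: the subspace of V_2 over GF(2) spanned by v1 = (0, 1).

def gf2_add(u, v):
    """Component-wise vector addition modulo 2."""
    return tuple((a + b) % 2 for a, b in zip(u, v))

def gf2_scale(c, v):
    """Scalar multiplication modulo 2."""
    return tuple((c * x) % 2 for x in v)

v1 = (0, 1)
# Linear combinations a1 * v1 for a1 in GF(2) = {0, 1}:
subspace = {gf2_scale(a1, v1) for a1 in (0, 1)}
print(subspace)  # {(0, 0), (0, 1)}

# Closure: the sum of any two subspace vectors stays in the subspace.
assert all(gf2_add(u, v) in subspace for u in subspace for v in subspace)
```

The two resulting vectors are exactly $(0,0)$ and $(0,1)$, matching the example.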
1.3 MATRIX THEORY
Matrix theory is another important concept, well known for systematically and efficiently solving a system of simultaneous linear equations in compact form [4, 5].
12 Optical Coding Theory with Prime
Matrix theory over a finite field under modulo addition and multiplication is of particular interest in optical coding theory because two-dimensional optical codes can be represented as matrices over a finite field.
1.3.1 Basic Definitions
An $m \times n$ matrix $A$ over a field $F$ is an array of $mn$ elements, arranged in $m$ rows and $n$ columns as
$$A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1j} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2j} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots & & \vdots \\
a_{i1} & a_{i2} & \cdots & a_{ij} & \cdots & a_{in} \\
\vdots & \vdots & & \vdots & & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mj} & \cdots & a_{mn}
\end{bmatrix} = [a_{ij}]$$
where $a_{ij}$ is an element in $F$ and located in the $i$th row and $j$th column of $A$. If $m$ equals $n$, such a matrix is called a square matrix.
A $1 \times n$ matrix has only one row and is also called a row vector.
An $m \times 1$ matrix has only one column and is also called a column vector.
A zero matrix $\mathbf{0}$ has all its $mn$ elements equal to the zero element in $F$.
The main diagonal of a matrix contains the elements, $a_{ii}$, extending diagonally from the top-left corner to the bottom-right corner, with equal column and row numbers.
A triangular matrix is an $n \times n$ square matrix with all its elements, either below or above the main diagonal, equal to the zero element in $F$. The upper and lower triangular matrices are denoted as
$$U = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
0 & a_{22} & a_{23} & \cdots & a_{2n} \\
0 & 0 & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & a_{nn}
\end{bmatrix} \qquad
L = \begin{bmatrix}
a_{11} & 0 & 0 & \cdots & 0 \\
a_{21} & a_{22} & 0 & \cdots & 0 \\
a_{31} & a_{32} & a_{33} & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{bmatrix}$$
respectively.
A diagonal matrix is an $n \times n$ square matrix with all its elements, except those in the main diagonal, equal to the zero element in $F$ such that $a_{ij} = 0$ for all $i \neq j$.
An identity matrix $I$ is an $n \times n$ diagonal matrix with all its elements in the main diagonal equal to the unity element in $F$. For example,
$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
The transpose of an $m \times n$ matrix $A = [a_{ij}]$ exchanges the rows and columns of $A$ to generate an $n \times m$ matrix $A^t = [a_{ij}]^t = [a_{ji}]$. For example,
$$A = \begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{bmatrix} \qquad
A^t = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$$
The transpose of an upper triangular matrix is a lower triangular matrix.
The transpose of a lower triangular matrix is an upper triangular matrix.
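As a quick sanity check, the transpose example can be reproduced in plain Python (an illustrative sketch, not from the text):

```python
# Transpose of the 3x3 example matrix: entry (i, j) moves to (j, i).

A = [[1, 4, 7],
     [2, 5, 8],
     [3, 6, 9]]

def transpose(M):
    """Exchange rows and columns of a matrix given as a list of rows."""
    return [[M[i][j] for i in range(len(M))] for j in range(len(M[0]))]

At = transpose(A)
print(At)  # [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
```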
A matrix can be partitioned into an array of smaller disjoint matrices, called submatrices. The total numbers of rows and columns of all submatrices must add up to those of the whole matrix. For example, matrix $A$ can be partitioned into
$$A = \begin{bmatrix} 1 & 5 & 9 \\ 2 & 6 & 10 \\ 3 & 7 & 11 \\ 4 & 8 & 12 \end{bmatrix}
= \begin{bmatrix} A_1 \\ A_2 \end{bmatrix}
= \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$$
and the submatrices can be expressed in different forms, such as
$$A_1 = \begin{bmatrix} 1 & 5 & 9 \\ 2 & 6 & 10 \end{bmatrix} \qquad
A_2 = \begin{bmatrix} 3 & 7 & 11 \\ 4 & 8 & 12 \end{bmatrix}$$
$$A_{11} = \begin{bmatrix} 1 \end{bmatrix} \qquad
A_{12} = \begin{bmatrix} 5 & 9 \end{bmatrix} \qquad
A_{21} = \begin{bmatrix} 2 \\ 3 \\ 4 \end{bmatrix} \qquad
A_{22} = \begin{bmatrix} 6 & 10 \\ 7 & 11 \\ 8 & 12 \end{bmatrix}$$
Two same-dimension matrices are said to be equal only if their respective elements are all identical.
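The same $4 \times 3$ partitioning can be sketched with Python list slicing (illustrative only; the variable names follow the example above):

```python
# Two partitionings of the 4x3 example matrix via row/column slicing.
A = [[1, 5, 9],
     [2, 6, 10],
     [3, 7, 11],
     [4, 8, 12]]

# Horizontal split: A1 holds the first two rows, A2 the last two.
A1, A2 = A[:2], A[2:]

# Four-block split: A11 is the (1,1) entry, A12 the rest of row 1,
# A21 the first column of rows 2-4, A22 the remaining 3x2 block.
A11 = [row[:1] for row in A[:1]]
A12 = [row[1:] for row in A[:1]]
A21 = [row[:1] for row in A[1:]]
A22 = [row[1:] for row in A[1:]]

print(A22)  # [[6, 10], [7, 11], [8, 12]]
```

Note that the row counts of $A_1$ and $A_2$ (2 + 2) and the blocks in each partition add up to the dimensions of the whole matrix, as required.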
1.3.2 Basic Operations and Properties
Let $A$ and $B$ be $m \times n$ matrices over a field $F$ and $c$ be an element in $F$. Their sum and product
$$A + B = \begin{bmatrix}
a_{11} + b_{11} & a_{12} + b_{12} & \cdots & a_{1n} + b_{1n} \\
a_{21} + b_{21} & a_{22} + b_{22} & \cdots & a_{2n} + b_{2n} \\
\vdots & \vdots & & \vdots \\
a_{m1} + b_{m1} & a_{m2} + b_{m2} & \cdots & a_{mn} + b_{mn}
\end{bmatrix} = [a_{ij} + b_{ij}]$$
$$cA = \begin{bmatrix}
c a_{11} & c a_{12} & \cdots & c a_{1n} \\
c a_{21} & c a_{22} & \cdots & c a_{2n} \\
\vdots & \vdots & & \vdots \\
c a_{m1} & c a_{m2} & \cdots & c a_{mn}
\end{bmatrix} = [c a_{ij}]$$
are $m \times n$ matrices over $F$, where $a_{ij}$, $b_{ij}$, $a_{ij} + b_{ij}$, and $c a_{ij}$ are elements in $F$. Addition of matrices with different dimensions is not defined.
Addition and scalar multiplication of matrices are both associative and commutative due to the same properties in $F$.
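These entry-wise definitions translate directly into code; a minimal sketch (the helper names `mat_add` and `mat_scale` are illustrative, not from the text):

```python
# Entry-wise sum A + B = [a_ij + b_ij] and scalar multiple cA = [c * a_ij].

def mat_add(A, B):
    """Matrix addition; defined only for same-dimension matrices."""
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    """Multiply every element of A by the scalar c."""
    return [[c * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))    # [[6, 8], [10, 12]]
print(mat_scale(2, A))  # [[2, 4], [6, 8]]
```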
Matrix multiplication, say $D = AB$, is defined only if the number of columns in $A$ is equal to the number of rows in $B$. The product is an $m \times n$ matrix $D = [d_{ij}]$ over a field $F$ if $A = [a_{il}]$ is an $m \times k$ matrix and $B = [b_{lj}]$ is a $k \times n$ matrix, both over $F$. The $(i, j)$th element in $D$ is the sum of the products of the elements in the $i$th row in $A$ and the respective elements in the $j$th column in $B$ such that
$$d_{ij} = \sum_{l=1}^{k} a_{il} b_{lj}$$
where $d_{ij}$, $a_{il}$, and $b_{lj}$ are elements in $F$ for $i \in [1, m]$, $l \in [1, k]$, and $j \in [1, n]$. For example,
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \qquad
B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix} \qquad
AB = \begin{bmatrix} 22 & 28 \\ 49 & 64 \end{bmatrix}$$
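The summation formula for $d_{ij}$ can be implemented directly; the sketch below (illustrative `mat_mul` helper, not from the text) reproduces the $AB$ example:

```python
# d_ij = sum over l of a_il * b_lj, implemented as three nested loops.

def mat_mul(A, B):
    """Product of an m x k matrix A and a k x n matrix B."""
    m, k, n = len(A), len(B), len(B[0])
    assert len(A[0]) == k, "columns of A must equal rows of B"
    return [[sum(A[i][l] * B[l][j] for l in range(k)) for j in range(n)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]
B = [[1, 2],
     [3, 4],
     [5, 6]]
print(mat_mul(A, B))  # [[22, 28], [49, 64]]
```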
It is easy to verify that matrix multiplication is distributive and associative. For example,
$$A(B + C) = AB + AC$$
$$(B + C)A = BA + CA$$
$$(AB)C = A(BC)$$
for matrices $A$, $B$, and $C$ over a field $F$. However, matrix multiplication is generally not commutative. For example, it is usually found that $AB \neq BA$.
The product of two upper triangular matrices produces an upper triangular matrix.
The product of two lower triangular matrices produces a lower triangular matrix.
The inverse $A^{-1}$ of an $n \times n$ square matrix $A$ over a field $F$ is an $n \times n$ square matrix over $F$, satisfying
$$A^{-1}A = AA^{-1} = I$$
where $I$ is the $n \times n$ identity matrix over $F$.
If its inverse exists, a matrix is called nonsingular. Otherwise, it is called singular.
The inverse of a triangular matrix is also a triangular matrix.
Theorem 1.5
If $D = AB$ and the inverses of $A$ and $B$ both exist over a field $F$, then
$$D^{-1} = B^{-1}A^{-1}$$
Proof Because $A^{-1}$ and $B^{-1}$ both exist, it is easy to show that
$$(B^{-1}A^{-1})D = B^{-1}A^{-1}AB = B^{-1}IB = B^{-1}B = I$$
$$D(B^{-1}A^{-1}) = ABB^{-1}A^{-1} = AIA^{-1} = AA^{-1} = I$$
So, the inverse of $D$ is $B^{-1}A^{-1}$, by definition.
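Theorem 1.5 can be spot-checked numerically over the reals; a small NumPy sketch (the matrices below are arbitrary invertible examples, not from the text):

```python
# Numerical check that (AB)^(-1) = B^(-1) A^(-1) for two invertible matrices.
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])  # det = 1, invertible
B = np.array([[1.0, 2.0], [0.0, 1.0]])  # det = 1, invertible

D_inv = np.linalg.inv(A @ B)
# Should agree (up to floating-point error) with B^(-1) A^(-1):
assert np.allclose(D_inv, np.linalg.inv(B) @ np.linalg.inv(A))
```

Note the reversed order: $A^{-1}B^{-1}$ would generally fail, since matrix multiplication is not commutative.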
1.3.3 Determinant
The determinant of an $n \times n$ square matrix $A = [a_{ij}]$ is defined as a sum of signed $n$-fold product terms and denoted as
$$|A| = \sum (-1)^q \, a_{1p_1} a_{2p_2} \cdots a_{ip_i} \cdots a_{np_n}$$
Each product term contains one permutation of one element from each row and column of $A$. Because each permutation is determined by a distinct set of $n$ subscripts $\{p_1, p_2, \ldots, p_i, \ldots, p_n\}$ and $p_i \in [1, n]$, there are in total $n!$ such permutations and, in turn, $n!$ product terms. The summation is performed over all these $n!$ product terms with the sign $(-1)^q$ determined by whether the number of elements $q$ that need to be reordered in order to obtain one distinct permutation is even or odd.
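The permutation definition translates into a short, if inefficient, program; a Python sketch (illustrative, not from the text) using `itertools.permutations`, with the inversion count of each permutation playing the role of $q$:

```python
# |A| = sum over all n! permutations of (-1)^q * a_{1,p1} * ... * a_{n,pn}.
from itertools import permutations

def det_by_permutations(A):
    """Determinant from the signed-permutation definition (0-based indices)."""
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        # q = number of inversions, i.e. pairs out of natural order.
        q = sum(1 for x in range(n) for y in range(x + 1, n)
                if perm[x] > perm[y])
        term = 1
        for i in range(n):
            term *= A[i][perm[i]]
        total += (-1) ** q * term
    return total

print(det_by_permutations([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2
```

With $n!$ terms of $n$ factors each, this is only practical for small $n$; the Laplace expansion below gives a recursive alternative.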
As a recursive method of calculating the determinant of $A = [a_{ij}]$ in terms of the sum of the $n$ determinants of $(n-1) \times (n-1)$ submatrices, the Laplace expansion formula is given by [4, 5]
$$|A| = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} |M_{ij}|$$
for a fixed $i \in [1, n]$. $M_{ij}$ denotes the $(n-1) \times (n-1)$ submatrix after the row and column containing $a_{ij}$ in $A$ have been removed. While $|M_{ij}|$ is called the minor of $a_{ij}$, $(-1)^{i+j}|M_{ij}|$ is called the cofactor of $a_{ij}$.
For example, for $n = 3$ and $i = 2$, the determinant of $A$ can be calculated by
$$\begin{aligned}
|A| &= \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \\
&= (-1)^{2+1} a_{21} \begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix}
 + (-1)^{2+2} a_{22} \begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix}
 + (-1)^{2+3} a_{23} \begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} \\
&= -a_{21} \left[ (-1)^{1+1} a_{12} a_{33} + (-1)^{1+2} a_{13} a_{32} \right]
 + a_{22} \left[ (-1)^{1+1} a_{11} a_{33} + (-1)^{1+2} a_{13} a_{31} \right] \\
&\quad - a_{23} \left[ (-1)^{1+1} a_{11} a_{32} + (-1)^{1+2} a_{12} a_{31} \right] \\
&= -a_{21} (a_{12} a_{33} - a_{13} a_{32}) + a_{22} (a_{11} a_{33} - a_{13} a_{31}) - a_{23} (a_{11} a_{32} - a_{12} a_{31})
\end{aligned}$$
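The Laplace expansion lends itself to a simple recursion; a Python sketch (illustrative, not from the text) expanding along the first row, so the sign $(-1)^{i+j}$ reduces to $(-1)^{j}$ with 0-based indices:

```python
# Recursive Laplace expansion: |A| = sum_j (-1)^(i+j) * a_ij * |M_ij|,
# with the fixed row i taken as the first row.

def minor(A, i, j):
    """Delete row i and column j to form the (n-1)x(n-1) submatrix M_ij."""
    return [row[:j] + row[j+1:] for r, row in enumerate(A) if r != i]

def det_laplace(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    # Expand along the first row (i = 0 in 0-based indexing).
    return sum((-1) ** j * A[0][j] * det_laplace(minor(A, 0, j))
               for j in range(n))

print(det_laplace([[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 10]]))  # -3
```

Each level of the recursion removes one row and one column, so the minors shrink until the base case of a $1 \times 1$ matrix, whose determinant is its single element.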