- Digital Communications I: Modulation and Coding Course
Period 3 - 2007
Catharina Logothetis
Lecture 12
- Last time, we talked about:
How is decoding performed for convolutional codes?
What is a maximum likelihood decoder?
What are soft decisions and hard decisions?
How does the Viterbi algorithm work?
- Trellis of an example ½ Conv. code
Input bits: 1 0 1 (followed by tail bits 0 0)
Output bits: 11 10 00 10 11
[Trellis diagram over stages t1 … t6; branches labeled input/output: 0/00 along the all-zero path, plus transitions 1/11, 0/11, 0/10, 1/00, 1/01, 0/01 between the other states]
- Block diagram of the DCS
[Block diagram: Information source → Rate 1/n Conv. encoder → Modulator → Channel → Demodulator → Rate 1/n Conv. decoder → Information sink]
Input sequence: m = (m1, m2, ..., mi, ...)
Codeword sequence: U = G(m) = (U1, U2, U3, ..., Ui, ...), where Ui = u1i, ..., uji, ..., uni is the branch word (n coded bits)
Received sequence: Z = (Z1, Z2, Z3, ..., Zi, ...), where Zi = z1i, ..., zji, ..., zni are the n demodulator outputs for branch word i
Decoded sequence: m̂ = (m̂1, m̂2, ..., m̂i, ...)
- Soft and hard decision decoding
In hard decision:
The demodulator makes a firm (hard) decision on whether a one or a zero was transmitted, and provides no other information to the decoder, such as how reliable the decision is.
In soft decision:
The demodulator provides the decoder with side information together with the decision. The side information gives the decoder a measure of confidence in the decision.
- Soft and hard decision decoding …
ML soft-decision decoding rule:
Choose the path in the trellis with minimum Euclidean distance from the received sequence.
ML hard-decision decoding rule:
Choose the path in the trellis with minimum Hamming distance from the received sequence.
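As a quick illustration of the two metrics, the branch comparisons can be sketched as follows (this helper code and its function names are my own, not the lecture's):

```python
# Hard decision: count bit disagreements (Hamming distance).
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Soft decision: squared Euclidean distance between the demodulator's
# real-valued outputs and the branch word mapped to antipodal +/-1 symbols.
def euclidean_sq(received, branch):
    return sum((r - c) ** 2 for r, c in zip(received, branch))

print(hamming([1, 1], [1, 0]))                   # 1
print(euclidean_sq([0.9, -0.2], [1.0, 1.0]))     # 1.45
```

The hard-decision metric discards reliability information; the soft-decision metric keeps it, which is why soft decoding performs better on the same channel.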
- The Viterbi algorithm
The Viterbi algorithm performs Maximum
likelihood decoding.
It finds the path through the trellis with the largest metric (maximum correlation or minimum distance).
At each step in the trellis, it compares the partial
metric of all paths entering each state, and keeps
only the path with the largest metric, called the
survivor, together with its metric.
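The procedure above can be sketched in code. This is an illustrative hard-decision Viterbi decoder for the rate-½, constraint-length-3 code used in these slides (generator taps 111 and 101 in binary); the representation and function names are my own, not the lecture's:

```python
# Generator taps over the register contents [current input, s1, s2].
G = [(1, 1, 1), (1, 0, 1)]

def branch_output(state, bit):
    # n = 2 output bits for the transition taken from `state` on input `bit`
    regs = (bit, state[0], state[1])
    return tuple(sum(g[i] * regs[i] for i in range(3)) % 2 for g in G)

def next_state(state, bit):
    return (bit, state[0])

def viterbi_decode(received_pairs):
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    # Path metric (accumulated Hamming distance) and survivor bits per state;
    # the encoder starts in state 00, so other states begin unreachable.
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    path = {s: [] for s in states}
    for z in received_pairs:
        new_metric, new_path = {}, {}
        for s in states:
            best = None
            for prev in states:
                for bit in (0, 1):
                    if next_state(prev, bit) != s:
                        continue
                    d = metric[prev] + sum(
                        a != b for a, b in zip(branch_output(prev, bit), z))
                    if best is None or d < best[0]:
                        best = (d, path[prev] + [bit])
            new_metric[s], new_path[s] = best
        metric, path = new_metric, new_path
    # The tail bits drive the encoder back to state 00.
    return path[(0, 0)]

# Decode the noiseless codeword of the slides' input 1 0 1 0 0:
pairs = [(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)]
print(viterbi_decode(pairs))  # [1, 0, 1, 0, 0]
```

At each stage only one survivor per state is kept, so the work per decoded bit is constant in the message length.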
- Example of hard-decision Viterbi decoding
Transmitted: m = (100), U = (11 10 11 00 11)
Received: Z = (11 10 11 10 01)
Decoded: m̂ = (101), Û = (11 10 00 10 11)
[Trellis diagram over stages t1 … t6, showing the Hamming branch metrics and the partial path metrics Γ(S(ti), ti); only the survivor into each state is kept]
- Example of soft-decision Viterbi decoding
Transmitted: m = (101), U = (11 10 00 10 11)
Received: Z = (1, 2/3, 2/3, −2/3, −2/3, 1, 2/3, −1, −2/3, 1)
Decoded: m̂ = (101), Û = (11 10 00 10 11)
[Trellis diagram over stages t1 … t6, showing the correlation branch metrics and the partial path metrics Γ(S(ti), ti)]
- Today, we are going to talk about:
The properties of Convolutional codes:
Free distance
Transfer function
Systematic Conv. codes
Catastrophic Conv. codes
Error performance
Interleaving
Concatenated codes
The error correction scheme in the Compact Disc
- Free distance of Convolutional codes
Distance properties:
Since a Convolutional encoder generates codewords of various sizes (as opposed to block codes), the following approach is used to find the minimum distance between all pairs of codewords:
Since the code is linear, the minimum distance of the code is the minimum distance between each of the codewords and the all-zero codeword.
This is the minimum distance in the set of all arbitrarily long paths along the trellis that diverge from and remerge with the all-zero path.
It is called the minimum free distance, or the free distance of the code, denoted by d_free or d_f.
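The free distance can be checked by brute force for a concrete code. The sketch below (my own helper code) assumes the slides' rate-½, K = 3 code with generators 111 and 101: by linearity it suffices to take the minimum codeword weight over short nonzero inputs, each terminated with the K − 1 tail zeros; the search length of 8 is an assumption that is ample here because the minimum-weight diverge-and-remerge path is short.

```python
from itertools import product

# Generator taps over [current input, s1, s2] for the slides' rate-1/2 code.
G = [(1, 1, 1), (1, 0, 1)]

def encode(bits):
    s1 = s2 = 0
    out = []
    for b in bits:
        regs = (b, s1, s2)
        out += [sum(g[i] * regs[i] for i in range(3)) % 2 for g in G]
        s1, s2 = b, s1
    return out

def free_distance(max_len=8):
    best = None
    for n in range(1, max_len + 1):
        for bits in product((0, 1), repeat=n):
            if not any(bits):
                continue                      # skip the all-zero input
            w = sum(encode(list(bits) + [0, 0]))  # tail returns state to 00
            best = w if best is None else min(best, w)
    return best

print(free_distance())  # 5, the weight of the path for input 1 0 0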
- Free distance …
[Trellis diagram over stages t1 … t6: the all-zero path, and the path of minimum weight that diverges from and remerges with it; branches are labeled with their Hamming weights (2, 1, 0, …)]
d_f = 5
- Transfer function of Convolutional codes
Transfer function:
The transfer function (generating function) is a tool that provides information about the weight distribution of the codewords.
The weight distribution specifies the weights of the different paths in the trellis (codewords), together with their lengths and the amount of information they carry.
T(D, L, N) = Σ_{i=d_f}^∞ Σ_{j=K}^∞ Σ_{l=1}^∞ D^i L^j N^l
D, L, N: placeholders
i: distance of the path from the all-zero path
j: number of branches the path takes until it remerges with the all-zero path
l: weight of the information bits corresponding to the path
- Transfer function …
Example of the transfer function for the rate ½ Convolutional code:
1. Redraw the state diagram such that the zero state is split into two nodes, the starting and ending nodes.
2. Label each branch by the corresponding D^i L^j N^l.
[Split state diagram: a = 00 (start), b = 10, c = 01, d = 11, e = 00 (end); branch labels: a→b: D²LN, c→b: LN, b→c: DL, d→c: DL, b→d: DLN, d→d: DLN, c→e: D²L]
- Transfer function …
Write the state equations (X_a, …, X_e are dummy variables):
X_b = D²LN·X_a + LN·X_c
X_c = DL·X_b + DL·X_d
X_d = DLN·X_b + DLN·X_d
X_e = D²L·X_c
Solve for T(D, L, N) = X_e / X_a:
T(D, L, N) = D⁵L³N / (1 − DL(1 + L)N) = D⁵L³N + D⁶L⁴N² + D⁶L⁵N² + …
One path with weight 5, length 3 and data weight of 1
One path with weight 6, length 4 and data weight of 2
One path with weight 6, length 5 and data weight of 2
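The first terms of the series can be recovered mechanically from the closed form by expanding the geometric series 1/(1 − DL(1+L)N) = Σ_k (DL(1+L)N)^k. The sketch below is my own helper code (monomials stored as dicts mapping exponent triples (i, j, l) of D^i L^j N^l to coefficients), not the lecture's:

```python
def mul(p, q):
    # Multiply two polynomials in D, L, N stored as {(i, j, l): coeff}.
    r = {}
    for (i1, j1, l1), c1 in p.items():
        for (i2, j2, l2), c2 in q.items():
            key = (i1 + i2, j1 + j2, l1 + l2)
            r[key] = r.get(key, 0) + c1 * c2
    return r

numerator = {(5, 3, 1): 1}            # D^5 L^3 N
ratio = {(1, 1, 1): 1, (1, 2, 1): 1}  # DL(1+L)N = D L N + D L^2 N

# Every ratio factor adds at least D^1, so terms of D-degree <= 6
# need at most one factor: keep only the k = 0 and k = 1 terms.
T = dict(numerator)
for key, c in mul(numerator, ratio).items():
    T[key] = T.get(key, 0) + c

print(sorted(T.items()))
# [((5, 3, 1), 1), ((6, 4, 2), 1), ((6, 5, 2), 1)]
```

These triples are exactly the (distance, length, data-weight) combinations of the three lowest-order paths.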
- Systematic Convolutional codes
A Conv. encoder at rate k/n is systematic if the k input bits appear unchanged as part of the n-bit branch word.
[Encoder diagram: the input is connected directly to one of the outputs]
Systematic codes in general have a smaller free distance than non-systematic codes.
- Catastrophic Convolutional codes
Catastrophic error propagation in a Conv. code:
A finite number of errors in the coded bits causes an infinite number of errors in the decoded data bits.
A Convolutional code is catastrophic if there is a closed loop of zero weight in the state diagram.
Systematic codes are not catastrophic:
At least one branch of the output word is generated directly by the input bits.
Only a small fraction of non-systematic codes are catastrophic.
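The zero-weight-loop condition above can be tested directly on a state diagram. A minimal sketch, keeping only the zero-weight edges (apart from the self-loop at the all-zero state) and looking for any cycle among them; the state numbering and the example edge weights are hypothetical, chosen to mimic a catastrophic code with a zero-output self-loop:

```python
def has_zero_weight_loop(edges):
    # edges: list of (src_state, dst_state, branch_output_weight).
    # Build the subgraph of zero-weight edges, excluding the legitimate
    # all-zero self-loop at state 0.
    graph = {}
    for src, dst, w in edges:
        if w == 0 and not (src == 0 and dst == 0):
            graph.setdefault(src, []).append(dst)
    # Cycle detection by depth-first search with three colors.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:
                return True                 # back edge: cycle found
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False
    return any(dfs(u) for u in list(graph) if color.get(u, WHITE) == WHITE)

# Hypothetical states a=0, b=1, c=2, d=3 with a zero-weight self-loop at d:
edges = [(0, 1, 2), (1, 3, 1), (3, 3, 0), (3, 2, 1), (2, 0, 2)]
print(has_zero_weight_loop(edges))  # True
```

A systematic code can never trigger this check, since every branch word contains the input bits and so a loop driven by nonzero inputs cannot have zero output weight.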
- Catastrophic Conv. …
Example of a catastrophic Conv. code:
Assume the all-zero codeword is transmitted.
Three errors in the coded bits are enough for the decoder to take the wrong path abdd…ddce.
This path contains only 6 ones, no matter how many times it stays in the loop at node d.
This results in many erroneous decoded data bits.
[Encoder and state diagram: states a = 00, b = 10, c = 01, d = 11, e = 00; branch outputs include 11, 10, 01, 00; the self-loop at node d has output 00, i.e. zero weight]
- Performance bounds for Conv. codes
The error performance of Conv. codes is analyzed based on the average bit error probability (not the average codeword error probability), because:
Codewords have variable sizes, due to the different sizes of the input.
For large blocks, the codeword error probability may converge to one while the bit error probability may remain constant.
- Performance bounds …
Analysis is based on:
Assuming the all-zero codeword is transmitted
Evaluating the probability of an “error event” (usually using bounds such as the union bound).
An “error event” occurs at a time instant in the trellis if a non-zero path leaves the all-zero path and remerges with it at a later time.
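Combining the all-zero assumption with the union bound over error events yields the standard textbook bounds; the exact expressions below are not spelled out on this slide, so treat them as an assumption (they follow the usual hard-decision form for a BSC with crossover probability p, where a_d counts the paths at distance d from the all-zero path):

```latex
% First-event error probability, bounded via the transfer function:
P(E) \;\le\; \sum_{d = d_f}^{\infty} a_d \, P_d
      \;\le\; T(D)\Big|_{D = 2\sqrt{p(1-p)}}
% Bit error probability for a rate k/n code, weighting each path by its
% data weight through the N placeholder:
P_b \;\le\; \frac{1}{k} \,
      \frac{\partial T(D, N)}{\partial N}\bigg|_{N = 1,\; D = 2\sqrt{p(1-p)}}
```

Here P_d is the pairwise probability of choosing a path at distance d over the all-zero path, and differentiating with respect to N converts the path enumeration into a count of erroneous information bits.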