- Modulation, Demodulation and Coding Course
Period 3 - 2007
Catharina Logothetis
Lecture 11
- Last time, we talked about:
Another class of linear codes, known as convolutional codes.
We studied the structure of the encoder and different ways of representing it.
- Today, we are going to talk about:
What are the state diagram and trellis representations of the code?
How is decoding performed for convolutional codes?
What is a maximum likelihood decoder?
What are soft decisions and hard decisions?
How does the Viterbi algorithm work?
- Block diagram of the DCS

Information source → Rate 1/n convolutional encoder → Modulator → Channel

$\mathbf{m} = (m_1, m_2, \ldots, m_i, \ldots)$ : input sequence
$\mathbf{U} = G(\mathbf{m}) = (U_1, U_2, U_3, \ldots, U_i, \ldots)$ : codeword sequence
$U_i = u_{1i}, \ldots, u_{ji}, \ldots, u_{ni}$ : branch word ($n$ coded bits)

Channel → Demodulator → Rate 1/n convolutional decoder → Information sink

$\mathbf{Z} = (Z_1, Z_2, Z_3, \ldots, Z_i, \ldots)$ : received sequence
$Z_i = z_{1i}, \ldots, z_{ji}, \ldots, z_{ni}$ : demodulator outputs for branch word $i$ ($n$ outputs per branch word)
$\hat{\mathbf{m}} = (\hat{m}_1, \hat{m}_2, \ldots, \hat{m}_i, \ldots)$ : decoded message
- State diagram

A finite-state machine only encounters a finite number of states.
State of a machine: the smallest amount of information that, together with the current input to the machine, can predict the output of the machine.
In a convolutional encoder, the state is represented by the content of the memory.
Hence, for a constraint length $K$ there are $2^{K-1}$ states.
- State diagram – cont’d

A state diagram is a way to represent the encoder.
A state diagram contains all the states and all possible transitions between them.
For a binary-input encoder, only two transitions initiate from each state, and only two transitions end up in each state.
- State diagram – cont’d

[State diagram of the rate 1/2, K = 3 encoder; each branch is labeled input/output (branch word).]

Current state   Input   Next state   Output (branch word)
S0 = 00           0       S0 = 00          00
S0 = 00           1       S2 = 10          11
S1 = 01           0       S0 = 00          11
S1 = 01           1       S2 = 10          00
S2 = 10           0       S1 = 01          10
S2 = 10           1       S3 = 11          01
S3 = 11           0       S1 = 01          01
S3 = 11           1       S3 = 11          10
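The transition table above can be simulated directly. Below is a minimal sketch in Python, assuming the rate 1/2, K = 3 encoder with generator polynomials 111 and 101 (the standard example encoder, which reproduces exactly these transitions); the function name `conv_encode` is our own:

```python
# Rate-1/2, K=3 convolutional encoder.
# State = (a, b) = (previous input bit, the bit before that).
# Generators g1 = 111 and g2 = 101 reproduce the state table above.

def conv_encode(bits):
    a, b = 0, 0                  # encoder memory, initial state S0 = 00
    out = []
    for m in bits:
        u1 = m ^ a ^ b           # first generator:  1 1 1
        u2 = m ^ b               # second generator: 1 0 1
        out.append((u1, u2))     # branch word (n = 2 coded bits)
        a, b = m, a              # shift the memory
    return out

# Message 1 0 1 followed by K-1 = 2 tail bits to flush the memory:
print(conv_encode([1, 0, 1, 0, 0]))
# -> [(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)]  i.e. U = 11 10 00 10 11
```

Each iteration performs one row lookup of the state table: the pair (state, input) determines the branch word and the next state.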
- Trellis – cont’d

The trellis diagram is an extension of the state diagram that shows the passage of time.

Example of one section of the trellis for the rate 1/2 code:

[States $S_0 = 00$, $S_2 = 10$, $S_1 = 01$, $S_3 = 11$ on the vertical axis, times $t_i$ and $t_{i+1}$ on the horizontal axis; each branch is labeled input/output: 0/00 and 1/11 leave $S_0$, 0/11 and 1/00 leave $S_1$, 0/10 and 1/01 leave $S_2$, 0/01 and 1/10 leave $S_3$.]
- Trellis – cont’d

A trellis diagram for the example code:

Input bits: 1 0 1, followed by tail bits 0 0
Output bits: 11 10 00 10 11

[Full trellis over times $t_1, \ldots, t_6$; every section repeats the eight branches of the state diagram, labeled input/output: 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10.]
- Trellis – cont’d

Input bits: 1 0 1, followed by tail bits 0 0
Output bits: 11 10 00 10 11

[Same trellis, now showing only the branches that can lie on a path starting and ending in state $S_0 = 00$; the path for input 1 0 1 0 0 follows the branches 1/11, 0/10, 1/00, 0/10, 0/11.]
- Optimum decoding

If the input message sequences are equally likely, the optimum decoder, which minimizes the probability of error, is the maximum likelihood (ML) decoder.
The ML decoder selects the codeword, among all possible codewords, that maximizes the likelihood function $p(\mathbf{Z} \mid \mathbf{U}^{(m')})$, where $\mathbf{Z}$ is the received sequence and $\mathbf{U}^{(m')}$ is one of the possible codewords. For $L$ information bits there are $2^L$ codewords to search!

ML decoding rule:
Choose $\mathbf{U}^{(m')}$ if $p(\mathbf{Z} \mid \mathbf{U}^{(m')}) = \max_{\text{over all } \mathbf{U}^{(m)}} p(\mathbf{Z} \mid \mathbf{U}^{(m)})$
- ML decoding for memoryless channels

Due to the independent channel statistics for memoryless channels, the likelihood function becomes
$$p(\mathbf{Z} \mid \mathbf{U}^{(m)}) = p(Z_1, Z_2, \ldots, Z_i, \ldots \mid \mathbf{U}^{(m)}) = \prod_{i=1}^{\infty} p(Z_i \mid U_i^{(m)}) = \prod_{i=1}^{\infty} \prod_{j=1}^{n} p(z_{ji} \mid u_{ji}^{(m)})$$
and equivalently, the log-likelihood function becomes
$$\gamma_{U^{(m)}} = \log p(\mathbf{Z} \mid \mathbf{U}^{(m)}) = \sum_{i=1}^{\infty} \log p(Z_i \mid U_i^{(m)}) = \sum_{i=1}^{\infty} \sum_{j=1}^{n} \log p(z_{ji} \mid u_{ji}^{(m)})$$
Here the total sum is the path metric, each term $\log p(Z_i \mid U_i^{(m)})$ is a branch metric, and each term $\log p(z_{ji} \mid u_{ji}^{(m)})$ is a bit metric.
The path metric up to time index $i$ is called the partial path metric.

ML decoding rule:
Choose the path with maximum metric among all the paths in the trellis.
This path is the “closest” path to the transmitted sequence.
- Binary symmetric channels (BSC)

[Channel diagram: modulator input bits 0 and 1 map to demodulator output bits 0 and 1, with crossover probability $p = p(1 \mid 0) = p(0 \mid 1)$ and $1 - p = p(1 \mid 1) = p(0 \mid 0)$.]

If $d_m = d(\mathbf{Z}, \mathbf{U}^{(m)})$ is the Hamming distance between $\mathbf{Z}$ and $\mathbf{U}^{(m)}$, then
$$p(\mathbf{Z} \mid \mathbf{U}^{(m)}) = p^{d_m} (1-p)^{Ln - d_m}$$
where $Ln$ is the size of the coded sequence, and
$$\gamma_{U^{(m)}} = -d_m \log\left(\frac{1-p}{p}\right) + Ln \log(1-p)$$

ML decoding rule:
Choose the path with minimum Hamming distance from the received sequence.
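The closed form above can be checked against the bit-by-bit metric. A minimal sketch (the candidate codeword and received bits are made-up values for illustration, not from the lecture):

```python
import math

p = 0.1                                              # BSC crossover probability

def gamma_closed_form(d, Ln):
    # gamma = -d * log((1-p)/p) + L*n * log(1-p)
    return -d * math.log((1 - p) / p) + Ln * math.log(1 - p)

def gamma_bitwise(Z, U):
    # sum of bit metrics log p(z | u)
    return sum(math.log(p if z != u else 1 - p) for z, u in zip(Z, U))

U = [1, 1, 1, 0, 0, 0, 1, 1]                         # a candidate codeword
Z = [1, 1, 0, 0, 0, 0, 1, 0]                         # received hard bits, d = 2

d = sum(z != u for z, u in zip(Z, U))
print(gamma_bitwise(Z, U), gamma_closed_form(d, len(Z)))   # equal values
# Since gamma decreases as d grows (for p < 0.5), maximizing gamma
# is the same as minimizing the Hamming distance d.
```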
- AWGN channels

For BPSK modulation, the transmitted sequence corresponding to the codeword $\mathbf{U}^{(m)}$ is denoted by $\mathbf{S}^{(m)} = (S_1^{(m)}, S_2^{(m)}, \ldots, S_i^{(m)}, \ldots)$, where $S_i^{(m)} = (s_{1i}^{(m)}, \ldots, s_{ji}^{(m)}, \ldots, s_{ni}^{(m)})$ and $s_{ji}^{(m)} = \pm\sqrt{E_c}$.
The log-likelihood function becomes
$$\gamma_{U^{(m)}} = \sum_{i=1}^{\infty} \sum_{j=1}^{n} z_{ji} s_{ji}^{(m)} = \langle \mathbf{Z}, \mathbf{S}^{(m)} \rangle$$
i.e. the inner product (correlation) between $\mathbf{Z}$ and $\mathbf{S}^{(m)}$.
Maximizing the correlation is equivalent to minimizing the Euclidean distance.

ML decoding rule:
Choose the path with minimum Euclidean distance to the received sequence.
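The equivalence between maximizing correlation and minimizing Euclidean distance can be seen numerically. A sketch with made-up values: since $s_{ji} = \pm\sqrt{E_c}$, the energy $\|\mathbf{S}^{(m)}\|^2$ is the same for every path, so only the cross term $\langle \mathbf{Z}, \mathbf{S}^{(m)} \rangle$ distinguishes candidates.

```python
import math

Ec = 1.0
s = math.sqrt(Ec)

# Two candidate BPSK sequences (all entries +-sqrt(Ec)) and a noisy received one:
S1 = [+s, +s, -s, +s]
S2 = [-s, +s, +s, -s]
Z  = [0.9, 1.2, -0.7, 0.4]

def correlation(Z, S):
    return sum(z * si for z, si in zip(Z, S))

def euclidean_sq(Z, S):
    return sum((z - si) ** 2 for z, si in zip(Z, S))

# The candidate with the larger correlation has the smaller Euclidean distance:
for S in (S1, S2):
    print(correlation(Z, S), euclidean_sq(Z, S))
```

This follows from $\|\mathbf{Z} - \mathbf{S}\|^2 = \|\mathbf{Z}\|^2 + \|\mathbf{S}\|^2 - 2\langle \mathbf{Z}, \mathbf{S} \rangle$, with the first two terms common to all paths.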
- Soft and hard decisions

In hard decision:
The demodulator makes a firm, or hard, decision on whether a one or a zero was transmitted, and provides no other information to the decoder, such as how reliable the decision is.
Hence, its output is only zero or one (the output is quantized to only two levels); these outputs are called “hard bits”.
Decoding based on hard bits is called “hard-decision decoding”.
- Soft and hard decisions – cont’d

In soft decision:
The demodulator provides the decoder with some side information together with the decision.
The side information gives the decoder a measure of confidence in the decision.
The demodulator outputs, which are called soft bits, are quantized to more than two levels.
Decoding based on soft bits is called “soft-decision decoding”.
On AWGN channels a gain of about 2 dB, and on fading channels about 6 dB, is obtained by using soft-decision instead of hard-decision decoding.
- The Viterbi algorithm

The Viterbi algorithm performs maximum likelihood decoding.
It finds the path through the trellis with the best metric (maximum correlation or minimum distance).
It processes the demodulator outputs in an iterative manner.
At each step in the trellis, it compares the metrics of all paths entering each state, and keeps only the path with the best metric, called the survivor, together with its metric.
It proceeds through the trellis by eliminating the least likely paths.
It reduces the decoding complexity from $2^L$ to $L \cdot 2^{K-1}$!
- The Viterbi algorithm – cont’d

Viterbi algorithm:
A. Do the following setup:
For a data block of L bits, form the trellis. The trellis has L + K - 1 sections, or levels; it starts at time $t_1$ and ends at time $t_{L+K}$.
Label all the branches in the trellis with their corresponding branch metrics.
For each state $S(t_i) \in \{0, 1, \ldots, 2^{K-1}-1\}$ in the trellis at time $t_i$, define a parameter $\Gamma(S(t_i), t_i)$, the partial path metric of the state.
B. Then, do the following:
- The Viterbi algorithm – cont’d

1. Set $\Gamma(0, t_1) = 0$ and $i = 2$.
2. At time $t_i$, compute the partial path metrics for all the paths entering each state.
3. Set $\Gamma(S(t_i), t_i)$ equal to the best partial path metric entering each state at time $t_i$. Keep the survivor path and delete the dead paths from the trellis.
4. If $i < L + K$, increase $i$ by 1 and return to step 2.
5. Start at state zero at time $t_{L+K}$. Follow the surviving branches backwards through the trellis. The path thus defined is unique and corresponds to the ML codeword.
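The steps above can be sketched in code for the example rate 1/2, K = 3 code. This is a hard-decision, Hamming-metric sketch; the `TRELLIS` table encodes the state diagram from earlier in the lecture, and the function name `viterbi_decode` is our own:

```python
# Hard-decision Viterbi decoder for the rate-1/2, K=3 example code
# (generators 111 and 101).  TRELLIS[state][input] = (next state, branch word).
TRELLIS = {
    0b00: {0: (0b00, (0, 0)), 1: (0b10, (1, 1))},
    0b01: {0: (0b00, (1, 1)), 1: (0b10, (0, 0))},
    0b10: {0: (0b01, (1, 0)), 1: (0b11, (0, 1))},
    0b11: {0: (0b01, (0, 1)), 1: (0b11, (1, 0))},
}

def viterbi_decode(Z):
    """Z: list of received branch words (pairs of hard bits). Returns input bits."""
    # Step 1: Gamma(0, t1) = 0; states not yet reached have metric infinity.
    metric = {s: float('inf') for s in TRELLIS}
    metric[0b00] = 0
    paths = {0b00: []}
    for z in Z:                      # steps 2-4: one trellis section per branch word
        new_metric = {s: float('inf') for s in TRELLIS}
        new_paths = {}
        for state, m in metric.items():
            if m == float('inf'):
                continue
            for bit, (nxt, out) in TRELLIS[state].items():
                # Branch metric: Hamming distance between z and the branch word.
                branch = sum(a != b for a, b in zip(z, out))
                if m + branch < new_metric[nxt]:     # keep only the survivor
                    new_metric[nxt] = m + branch
                    new_paths[nxt] = paths[state] + [bit]
        metric, paths = new_metric, new_paths
    # Step 5: trace back from state zero (the tail bits drive the encoder to 00).
    return paths[0b00]

# Decoding a noise-free received sequence recovers message 1 0 1 plus the tail bits:
print(viterbi_decode([(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)]))
# -> [1, 0, 1, 0, 0]
```

At every time step only one survivor per state is stored, which is what reduces the search from $2^L$ paths to $L \cdot 2^{K-1}$ add-compare-select operations.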
- Example of hard-decision Viterbi decoding

$\mathbf{m} = (101)$
$\mathbf{U} = (11\ 10\ 00\ 10\ 11)$
$\mathbf{Z} = (11\ 10\ 11\ 10\ 01)$

[Trellis over times $t_1, \ldots, t_6$ with each branch labeled input/output; the decoder searches it for the path with minimum Hamming distance from $\mathbf{Z}$.]
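Before tracing the trellis, it is worth noting how far the received sequence lies from the transmitted one, branch by branch. A quick sketch of the per-branch Hamming distances the decoder will accumulate:

```python
U = [(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)]   # transmitted branch words
Z = [(1, 1), (1, 0), (1, 1), (1, 0), (0, 1)]   # received branch words

# Per-branch Hamming distances and the total number of channel errors:
branch_d = [sum(a != b for a, b in zip(u, z)) for u, z in zip(U, Z)]
print(branch_d, sum(branch_d))
# -> [0, 0, 2, 0, 1] 3
```

So $\mathbf{Z}$ contains three channel errors relative to $\mathbf{U}$, concentrated in the third and fifth branch words.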