Trellis-based source and channel coding


Trellis-Based Source and Channel Coding

DISSERTATION

for the award of the degree of doctor
at the Technische Universiteit Delft,
under the authority of the Rector Magnificus Prof. ir. K.F. Wakker,
to be defended in public before a committee
appointed by the College van Dekanen,
on Tuesday 29 March 1994 at 16.00

by

Renatus Josephus van der Vleuten,

elektrotechnisch ingenieur (electrical engineer)


Doctoral committee:

Rector Magnificus
Prof. dr. ir. J. Biemond (promotor)
Dr. ir. J.H. Weber (toegevoegd promotor)
Prof. dr. J.C. Arnbak
Prof. dr. ir. E. Backer
Prof. dr. ir. J.P.M. Schalkwijk
Prof. dr. ir. K.A. Schouhamer Immink
Prof. dr. ir. A.J. Vinck

CIP-DATA KONINKLIJKE BIBLIOTHEEK, DEN HAAG

Vleuten, Renatus Josephus van der

Trellis-based source and channel coding / Renatus Josephus van der Vleuten. - Delft: Technische Universiteit Delft, Faculteit der Elektrotechniek. - Ill.
Thesis Technische Universiteit Delft. - With ref. - With summary in Dutch.
ISBN 90-5326-013-7
Subject headings: trellis-coded quantization / trellis-coded modulation.

Copyright (c) 1994 by Renatus Josephus van der Vleuten

All rights reserved. No part of this thesis may be reproduced or transmitted in any form or by any means, electronic, mechanical, photocopying, any information storage and retrieval system, or otherwise, without written permission from the copyright owner.


Contents

Summary

I Trellis-Based Source Coding

1 Introduction
1.1 Multidimensional Quantization
1.2 Codebook design
1.3 Trellis Waveform Coding

2 New Constructions of Trellis-Coded Quantizers
2.1 Introduction
2.2 Trellis Waveform Coding
2.2.1 Codebook Design Methods
2.2.2 The Fake Process Approach
2.2.3 New Constructions
2.3 Trellis-Coded Quantization
2.4 Trellis-Coded Vector Quantization
2.5 Conclusions

3 Performance Evaluation
3.1 Introduction
3.2 Preliminaries
3.2.1 Implementation Complexity
3.2.2 Training Sequences
3.2.3 Confidence Intervals
3.2.4 Codebook Optimization
3.3 TWC and TCQ Experiments
3.4 TCVQ Experiments
3.5 Gauss-Markov Sources
3.6 The M-Algorithm
3.7 Discussion

4 Rate Distortion Theory for Trellis Waveform Coding
4.1 Introduction
4.2 Rate Distortion Theory for Discrete Memoryless Sources
4.3 Discrete Alphabet Rate Distortion Theory
4.4 Application to Trellis Waveform Coding
4.5 Conclusions

5 DCT Coding of Images Using TCQs
5.1 Introduction
5.2 The Discrete Cosine Transform
5.3 Quantization
5.4 Channel Error Protection
5.5 Image Coding Experiments
5.6 Conclusions

6 Discussion

II Trellis-Based Channel Coding

7 Introduction
7.1 Digital Communication
7.2 The Additive White Gaussian Noise Channel
7.3 Modulation for Bandwidth-Limited Channels
7.4 Trellis-Coded Modulation

8 TCM with Optimized Signal Constellations
8.1 Introduction
8.2 1-Dimensional Signal Constellations
8.2.1 Optimization for R = 1
8.2.2 Extension to R > 1
8.3 2-Dimensional Signal Constellations
8.3.1 PSK Constellations
8.3.2 QAM Constellations

A Autocorrelation of the Fake Process
B Proof of White Spectrum for Construction B
C Proof of White Spectrum for Construction C
D Codebook Initializations for the TWCs and TCQs

Bibliography
Samenvatting (summary in Dutch)
Acknowledgements
Curriculum Vitae

Summary

This thesis concerns the efficient transmission of digital data, such as digitized sounds or images, from a source to its destination. To make the best use of the limited capacity of the source-destination channel, a source coder is used to delete the less significant information. To correct the occurring transmission errors, a channel coder is used. Efficient techniques for source and channel coding, based on trellises, are investigated in Part I and Part II of this thesis, respectively.

Part I: Trellis-Based Source Coding

There are two mainstreams in source coding: lossless source coding and "lossy" source coding, or data reduction. The latter form of source coding is considered here.

The art of data reduction is commonly referred to as quantization. The advances of digital technology have led to quantizers which process several source samples at once and are known as vector quantizers. In general, the complexity of vector quantization increases exponentially with increasing vector dimension, but there are some reduced-complexity variations of vector quantization, one of which is known as trellis waveform coding. TWC (used to denote both trellis waveform coding and a trellis waveform coder) can be improved by a technique known as trellis-coded quantization (TCQ).


Part I of this thesis reports on newly designed TCQs, which are based on a fake process approach. Using this approach, one tries to imitate the original source by creating a fake process, which is generated by feeding a random bit stream to the decoder. To evaluate the performances of the new quantizers, experiments have been performed for memoryless Laplacian, Gaussian, and uniform sources. For the memoryless Gaussian and Laplacian sources, the proposed TCQs improve upon all previously published results.

The discipline of information theory that treats quantization is called rate distortion theory. Rate distortion theory for memoryless continuous-amplitude sources with discrete representations is called discrete-alphabet rate distortion theory. Computation of the discrete-alphabet rate distortion function not only provides the asymptotically achievable coding performance, but also the asymptotically optimal representation symbols. Experiments have shown that TWCs using those representation symbols can perform close to optimized TWCs. This is also the case for TWCs using the maximum-entropy quantizer representation symbols. At low complexities, the maximum-entropy quantizer based TWCs outperform the rate distortion function based TWCs.

The performance of the new TCQs has also been investigated for a discrete cosine transform (DCT) image coding scheme. The performance of the coding scheme using TCQs has been compared with the performance when using Lloyd-Max quantizers (LMQs). It was found that the perceived image quality is considerably improved when using TCQs compared to LMQs: the occurring edges are better preserved and many fewer blocking effects and much less background noise are visible.

The main practical advantages of TCQ are that it does not need entropy coding and that it has an asymmetric complexity, concentrated at the encoder. Further, a distinct practical advantage of TCQ over


Part II: Trellis-Based Channel Coding

A distinction can be made between power-limited and bandwidth-limited channels; the latter are considered here. Important practical examples of such channels are the telephone channel and the magnetic recording channel.

Since the channels cannot directly handle a bit stream, it has to be converted, by a modulator, into a suitable waveform. The traditional forms of modulation for bandwidth-limited channels are pulse amplitude modulation (PAM), phase shift keying (PSK), and quadrature amplitude modulation (QAM). By combining the channel coding and modulation functions a performance gain over the traditional uncoded modulation can be achieved. An effective method for designing combined coding and modulation codes is known as trellis-coded modulation (TCM). TCM uses the symmetries of binary convolutional codes to map the channel symbols onto the trellis. Traditionally, the channel symbols are selected from the PAM, PSK, or QAM signal constellations that are also used for uncoded modulation.

In Part II of this thesis, the implementation of a computer search to jointly optimize the convolutional code, the mapping, and the signal constellation in a TCM scheme is discussed. As a result of the search, TCM schemes that outperform the traditional TCM schemes have been found. The performance gains over the traditional schemes are obtained without expanding the signal constellation and without


1 Introduction

1.1 Multidimensional Quantization

There are two main streams in source coding: lossless source coding, or data compaction, and "lossy" source coding, or data reduction or compression. The latter form of source coding is the topic of Part I of this thesis. The art of data reduction is commonly referred to as quantization.

In the early days of digital signal processing, the only task of the quantizer was to perform an analog-to-digital conversion, scaling the input samples and rounding them to the nearest integer number. The advances in digital technology have led to more complex quantizers. Although a high-resolution analog-to-digital conversion is still the first step of a digital signal processing system, it is often followed at some point in the system by a secondary quantization step [Gers92]. Whereas the analog-to-digital converter is simply a means to obtain a signal suitable for digital processing, the purpose of the secondary quantization step is to reduce the amount of information needed to describe the digital signal, at the cost of an increased distortion in its representation of the original signal.

The amount of information necessary to describe a signal is called the entropy and is expressed in bits per symbol (or sample). The distortion is usually measured as the mean squared error (MSE). The best quantizer is the one that results in the lowest distortion at a certain fixed entropy, or, alternatively, it is the one that results in the lowest entropy at a certain fixed distortion. Theoretical bounds on the performance of any quantizer are given by rate distortion theory (which is discussed in Chapter 4).

A quantizer that operates on a single signal sample at a time is called a scalar quantizer; when it quantizes several samples at once, it is known as a multidimensional or vector quantizer. VQ (which is here used to denote both Vector Quantization and a Vector Quantizer) in its basic form is a straightforward extension of 1-dimensional scalar quantization. Specifically, an N-dimensional input vector x is mapped onto an N-dimensional output vector y, where y is taken from a finite set, called the codebook.

Given a source vector and a codebook, the aim of the VQ is to produce the best representation vector, defined as that vector from the codebook which has the minimal distortion from the source vector. Thus, for a given source vector, the VQ computes the distortion for each codebook vector, selects the one having minimal distortion, and transmits the corresponding codebook index. This index is used at the decoder to select the corresponding vector from its codebook (which is identical to the codebook of the encoder).
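As an illustration of this encoding rule, a minimal sketch in C follows; the function name, the row-wise codebook layout, and the parameter names are our own assumptions, not part of the thesis.

    #include <float.h>
    #include <stddef.h>

    /* Nearest-neighbor VQ encoding: return the index of the codebook
     * vector with minimal squared-error distortion from the source
     * vector x.  The codebook is stored row-wise: 'cardinality'
     * vectors of dimension N.                                        */
    size_t vq_encode(const double *x, const double *codebook,
                     size_t cardinality, size_t N)
    {
        size_t best = 0;
        double best_d = DBL_MAX;
        for (size_t i = 0; i < cardinality; i++) {
            double d = 0.0;
            for (size_t k = 0; k < N; k++) {
                double e = x[k] - codebook[i * N + k];
                d += e * e;              /* squared-error distortion */
            }
            if (d < best_d) { best_d = d; best = i; }
        }
        return best;    /* this index is transmitted to the decoder */
    }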

1.2 Codebook design

An optimal quantizer minimizes the expected distortion for a given codebook dimension and cardinality. Basically, there are two approaches towards the design of the codebook:

1. Stochastic codebook, the elements of which are chosen at random from the set of input vectors. The input vectors can also be approximated by vectors of random variables having appropriately adapted variances.

2. Optimized codebook, which is obtained as the result of an optimization procedure which adapts the codebook elements to best match a training set of input vectors.

The asymptotic performance (for large N) of the stochastic and optimized codebook VQs approaches the rate distortion bound [Berg71, Vite79].

The conditions for the minimum-distortion quantizer were derived by Lloyd and Max [Lloy82, Max60]. The first (obvious) condition has already been discussed: the quantizer should select the output vector that results in the minimal distortion. The second condition is that each output vector be the centroid of those input vectors that are mapped onto it. In practice, the statistics of the input vectors may not be known. In that case, the codebook can be designed using representative training vectors. The VQ performance is then quantified by its performance for vectors outside the training set.

A codebook design algorithm that has successfully been applied was developed by Linde, Buzo, and Gray [Lind80]. Their extension of Lloyd's algorithm is known as the LBG or K-means or generalized Lloyd algorithm and has been listed in Figure 1.1. The drawbacks of the algorithm are that it converges slowly and that it converges to a local optimum.

Step 1. Choose an initial codebook.
Step 2. Encode the training set using the present codebook.
Step 3. Replace each output vector by the centroid of those input vectors that were mapped onto it (in Step 2). If the distortion is sufficiently small then quit, else go to Step 2.

Figure 1.1: The LBG codebook optimization algorithm.

A better codebook may be found by running the algorithm several times with different initial codebooks and selecting the best result. However, the computational burden of this search procedure soon becomes prohibitive.
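For concreteness, a minimal scalar (N = 1) sketch of one LBG iteration of Figure 1.1 in C; the fixed maximum codebook size and the names are our own.

    /* One LBG pass over a scalar training set: encode (Step 2), then
     * move each codebook level to the centroid of the samples mapped
     * onto it (Step 3).  Returns the average distortion of the pass;
     * the caller iterates until the decrease becomes small.          */
    double lbg_pass(double *codebook, int K, const double *train, int T)
    {
        double sum[256] = { 0.0 };   /* assumes K <= 256 */
        int    cnt[256] = { 0 };
        double dist = 0.0;
        for (int t = 0; t < T; t++) {
            int best = 0;
            double bd = 1e300;
            for (int i = 0; i < K; i++) {
                double e = train[t] - codebook[i];
                if (e * e < bd) { bd = e * e; best = i; }
            }
            sum[best] += train[t];
            cnt[best]++;
            dist += bd;
        }
        for (int i = 0; i < K; i++)          /* centroid update */
            if (cnt[i] > 0) codebook[i] = sum[i] / cnt[i];
        return dist / T;
    }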

Finally, it should be remarked that quantizers designed for minimum distortion, for a given codebook cardinality, in general are not optimal in a rate distortion sense, i.e. the same distortion, but at a lower rate, may be obtained by another quantizer having a larger codebook cardinality. In that case, entropy coding (see e.g. [Gers92]) of the codebook indexes is applied, i.e. variable-length codewords are assigned to the indexes. One of the approaches to finding such rate distortion optimal quantizers is known as entropy-constrained vector quantization (ECVQ) [Chou89, Gers92]. The design algorithm is similar to the LBG algorithm and has been listed in Figure 1.2. A more detailed listing of this algorithm can be found in [Chou89] and [Gers92].
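The essential change relative to the LBG algorithm is the distortion measure used in Step 2 of Figure 1.2 (shown further below); a one-line sketch in C, where the weighting factor lambda and the names are our own:

    /* ECVQ encoding metric: squared error plus lambda times the length
     * (in bits) of the variable-length codeword currently assigned to
     * the codebook index; lambda trades distortion against rate.     */
    double ecvq_metric(double x, double code, int len_bits, double lambda)
    {
        double e = x - code;
        return e * e + lambda * (double)len_bits;
    }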

1.3 Trellis Waveform Coding

The complexity of VQ increases exponentially with increasing vector dimension N, but there are some variations of VQ which try to reduce this complexity.

Step 1. Choose an initial codebook and assign initial codewords (variable or fixed length) to the indexes.
Step 2. Encode the training set using the present codebook, using as the distortion measure the weighted sum of the MSE between the input vector and a code vector and the length of the codeword assigned to that codebook vector.
Step 3. Replace each output vector by the centroid of those input vectors that were mapped onto it (in Step 2). Assign new variable-length codewords to the codebook vectors, based on their frequency of selection in Step 2. If the distortion is sufficiently small then quit, else go to Step 2.

Figure 1.2: The ECVQ codebook optimization algorithm.

One of these variations is known as trellis waveform coding. In theory, trellis waveform coders use an infinite-dimensional codebook (N = ∞) having a special structure that allows for reduced-complexity (relative to general VQ) encoding and decoding procedures. In practice, the highest occurring dimension is finite since there is a practical limit on the implementation complexity.

The design of trellis waveform coders is the topic of Chapter 2. The performances of the new quantizers are then evaluated in Chapter 3 for sources with well-behaved distributions, in Chapter 4 in the light of the optimal performance theoretically achievable, as given by rate distortion theory, and in Chapter 5 in an image coding application. Chapter 6 discusses trellis waveform coding from a theoretical as well as a practical point of view.

2 New Constructions of Trellis-Coded Quantizers

2.1 Introduction

TWC (used to denote both trellis waveform coding and a trellis waveform coder) is a proven technique for source coding, with a long history [Vite79, Gers92]. The encoder consists of a codebook and a finite state machine, the state transitions of which specify the codebook symbol which is used to represent the source symbol. All possible state sequences of the finite state machine can be represented as paths through a trellis. An example is depicted in Figure 2.1.

In Figure 2.1(a), the nodes of the trellis represent the state of the machine and the branch values represent the representation symbol of the quantizer. In this case, each source sample is represented by a single bit (which completely specifies the path). For each branch, a distortion function is defined, equal to the squared difference between the source symbol and the representation symbol for that branch.

Figure 2.1: Trellis diagram for a 2-state 1-bit-per-sample trellis waveform coder: (a) the TWC, with branch representation symbols 0.5 and -1.5 leaving one state and 1.5 and -0.5 leaving the other, and (b) the distortion trellis for the source symbols of Table 2.1.

1.4   0.8   -1.4   1.1   -0.7

Table 2.1: Example source symbols.

The distortion trellis of Figure 2.1(b) corresponds to the source symbols listed in Table 2.1. For each path through the trellis, the distortion is the sum of the distortions associated with the branches of that path. The encoder investigates the paths through the trellis in order to find the path that minimizes the total distortion. The optimal algorithm for this search is the Viterbi algorithm [Vite71, Forn73]. In this rather trivial example, the best path (the representation symbols of which have been indicated in bold type in Figure 2.1(a)) results in a total distortion of 1.71.
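As a sketch, the Viterbi search for this example fits in a few lines of C. The branch values are those of Figure 2.1; the transition structure (the next state equals the input bit, i.e. a 1-bit shift register) is our reading of the figure. The program reproduces the minimal total distortion of 1.71 for the source symbols of Table 2.1.

    #include <stdio.h>

    /* Viterbi search over the 2-state 1-bit-per-sample TWC of
     * Figure 2.1.  out[s][b] is the representation symbol on the
     * branch leaving state s with input bit b; the next state is b. */
    int main(void)
    {
        const double out[2][2] = { { 0.5, -1.5 },    /* from state 0 */
                                   { 1.5, -0.5 } };  /* from state 1 */
        const double x[5] = { 1.4, 0.8, -1.4, 1.1, -0.7 };
        double cost[2] = { 0.0, 1e30 };         /* start in state 0 */

        for (int k = 0; k < 5; k++) {
            double next[2];
            for (int b = 0; b < 2; b++) {       /* next state = b   */
                double c0 = cost[0] + (x[k]-out[0][b])*(x[k]-out[0][b]);
                double c1 = cost[1] + (x[k]-out[1][b])*(x[k]-out[1][b]);
                next[b] = (c0 < c1) ? c0 : c1;  /* keep the survivor */
            }
            cost[0] = next[0];
            cost[1] = next[1];
        }
        printf("minimal total distortion = %.2f\n",
               (cost[0] < cost[1]) ? cost[0] : cost[1]);   /* 1.71 */
        return 0;
    }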

Given the trellis structure, the question is how to design the trellis codebook, i.e. how to choose the branch representation symbols. This is the topic of Section 2.2. Section 2.3 and Section 2.4, respectively, treat two special cases of TWC, namely trellis-coded quantization and trellis-coded vector quantization. Section 2.5 concludes the chapter.

2.2 Trellis Waveform Coding

2.2.1 Codebook Design Methods

Traditionally, there have been two methods for designing trellis codebooks (they are the TWC equivalents of the vector-quantizer design methods of Section 1.2). The first, based on the asymptotic optimality proof [Vite79], stochastically populates the trellis with randomly chosen samples from the source distribution. Although in general this method is very complex, Pearlman et al. have shown that it can be considerably simplified at the cost of a relatively small increase in distortion. Time-invariant TWCs (which use the same representation symbols at each step), which are considered in this thesis, can achieve performances close to those of time-varying TWCs.

The second codebook design method optimizes a given initial codebook; an algorithm is described in [Stew82]. The algorithm is based on the LBG algorithm for designing a VQ [Lind80] (see also Section 1.2) and has the same drawbacks: it converges to a local optimum and the convergence can be very slow, depending on the initial codebook, the number of representation symbols, and the required accuracy. Thus, finding a good codebook using this algorithm is a trial-and-error procedure.

Although both methods have been successfully applied, their disadvantage is that they are essentially non-constructive. The first method just picks a random code; the second method picks a random code and tries to improve it. A first constructive design method was given by Marcellin and Fischer [Marc90a], who map the representation symbols deterministically onto the trellis according to a convolutional code (interestingly, it was observed already in [Free88] that optimized unconstrained trellis codes tend to have a great deal of regularity, but the link to convolutional codes was not made). The performance of the TWCs of [Marc90a] in general is good and in some cases superior to all previous results from the quantization literature, which was our reason for investigating new constructions of TWCs.

2.2.2 The Fake Process Approach

The new TWC constructions are based on the fake process approach of [Lind78]. Using this approach, one tries to imitate the original source by a "fake process," which is generated by a random walk through a time-invariant trellis. The sequence of representation symbols produced in this way should have the same statistics as the source. In particular, as shown in [Lind78], a necessary (but not sufficient) condition is that the spectrum of the fake process match that of the source.

Figure 2.2: Trellis diagram of a general 2-state TWC (branch representation symbols A, B, C, D).

For sources with memory, a better performance is obtained by incorporating the TWCs into a predictive coding scheme (such as described in [Ayan86]) which decorrelates (whitens) the source samples (see also Section 3.5). Thus, since it is assumed that the sequence of source samples has a flat (or white) spectrum, the representation sequence should also have this property. While for randomly populated, time-varying trellises this requirement is fulfilled by definition, for deterministically populated, time-invariant trellises it is not. Therefore, in order to find out how to generate white representation sequences, a study is made of the spectrum, i.e. the autocorrelation, of sequences generated by time-invariant trellises with uncorrelated inputs.

As an example, consider the 2-state trellis shown in Figure 2.2. The spectrum of a generated sequence $\{x_k\}$ is flat if

$$R(\tau) = E[x_k x_{k+\tau}] = 0, \qquad (2.1)$$

for $|\tau| > 0$. Writing out (2.1) for $\tau = 1$ gives

$$R(1) = A P(A)\{A P(A|A) + B P(B|A)\} + B P(B)\{C P(C|B) + D P(D|B)\} + C P(C)\{A P(A|C) + B P(B|C)\} + D P(D)\{C P(C|D) + D P(D|D)\}. \qquad (2.2)$$

Since independent inputs are assumed, (2.2) simplifies to

$$R(1) = \frac{\{A P(A) + C P(C)\}\{A P(A) + B P(B)\}}{P(A) + P(B)} + \frac{\{B P(B) + D P(D)\}\{C P(C) + D P(D)\}}{P(C) + P(D)}. \qquad (2.3)$$

Finally, substituting $a = A P(A)$, $b = B P(B)$, $c = C P(C)$, and $d = D P(D)$ into (2.3) results in

$$R(1) = \begin{pmatrix} a+c \\ b+d \end{pmatrix} \cdot \begin{pmatrix} (a+b)/\{P(A)+P(B)\} \\ (c+d)/\{P(C)+P(D)\} \end{pmatrix}. \qquad (2.4)$$

Thus $R(1)$ can be written as the inner (dot) product of two vectors. To ensure that $R(1) \equiv 0$ it suffices that one of the vectors in (2.4) be the zero vector. So

$$\{b = -a \wedge d = -c\} \vee \{c = -a \wedge d = -b\}. \qquad (2.5)$$

Interpreting the two solutions in (2.5) shows that the first solution implies that the branches emanating from a state should have opposite values and the second solution implies that the branches entering a state should have opposite values, provided that the two branches are selected (by the encoder) with equal probability. The reader can easily verify that (2.5) also guarantees that $R(\tau)$ is zero for $\tau > 1$. The 2-state example can be extended as follows for trellises corresponding to q-ary shift registers.

Consider a TWC having $q^\nu$ states $S_l$, $1 \le l \le q^\nu$, with $q = 2^n$ branches entering and leaving each state. The branch from state $S_{\lceil l/q \rceil + rq^{\nu-1}}$, $0 \le r \le q-1$, to state $S_l$ is assigned the representation symbol $W_{l+rq^\nu}$, where $\lceil t \rceil$ denotes the smallest integer not less than $t$. The rate, $R$, equals $n$ b/sample.

Figure 2.3: Trellis diagram of an 8-state trellis waveform coder for q = 2: (a) the states are numbered $S_l$, $1 \le l \le 8$, and the branches have representation symbols $W_k$, $1 \le k \le 16$; (b) example of the symmetry of the underlying convolutional code.
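A small C helper that enumerates this connectivity may make the indexing concrete; states and symbols are numbered from 1, as in the text, and the parameter values below are arbitrary.

    #include <stdio.h>

    /* Print, for each state l and each r, the predecessor state
     * ceil(l/q) + r*q^(nu-1) and the representation symbol
     * W[l + r*q^nu] carried by that branch, per the rule above.      */
    int main(void)
    {
        const int q = 2, nu = 3;
        int q_nu = 1;
        for (int i = 0; i < nu; i++) q_nu *= q;      /* q^nu     */
        const int q_nu1 = q_nu / q;                  /* q^(nu-1) */
        for (int l = 1; l <= q_nu; l++)
            for (int r = 0; r < q; r++)
                printf("S%d <- S%d, symbol W%d\n",
                       l, (l + q - 1) / q + r * q_nu1, l + r * q_nu);
        return 0;
    }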

As an example, Figure 2.3(a) shows an 8-state TWC with branch values $W_k$ and states $S_l$ for q = 2. As derived in Appendix A, assuming all trellis branches are selected with equal probability (it is shown in Section 3.7 that this is a good approximation), the autocorrelation of the fake process generated by a random walk through the trellis, denoted by $R(\tau)$, can be written as

$$R(\tau) = q^{-(\nu+\tau+1)} \sum_{i=1}^{q^{\nu-\tau+1}} \left[ \left( \sum_{j=1}^{q^\tau} W_{i+(j-1)q^{\nu-\tau+1}} \right) \cdot \left( \sum_{j=1}^{q^\tau} W_{(i-1)q^\tau+j} \right) \right], \qquad (2.6)$$


for $1 \le \tau \le \nu+1$. For obtaining $R(\tau) = 0$, according to (2.6) there are two trivial solutions (the equivalents of (2.5)):

$$\sum_{j=1}^{q^\tau} W_{i+(j-1)q^{\nu-\tau+1}} = 0, \quad \text{and} \qquad (2.7)$$

$$\sum_{j=1}^{q^\tau} W_{(i-1)q^\tau+j} = 0, \qquad (2.8)$$

for $1 \le \tau \le \nu+1$ and $1 \le i \le q^{\nu-\tau+1}$. For $\tau = 1$, (2.7) and (2.8), respectively, state that the sum of the values of the branches entering or leaving each state should be zero, in order for $R(\tau)$ to be zero. Based on this observation, in [Vleu91], for q = 2, TWCs were constructed and their performances evaluated. It was found, however, that TWCs based on convolutional codes, in particular those of rate $1/\nu$, have a better performance. They use a one-to-one mapping from the convolutional-code symbols to the representation symbols. The generalization to $q = 2^n$, developed here, assumes that the underlying rate-$1/\nu$ q-ary convolutional code has a symmetry of its $q^\nu$ different branch symbols $Y_m$, as specified by the following set of equations:

$$W_{1+(q^{\nu-1}(2r(m \bmod 2) - r + (m-1) \bmod q) + q((m-1) \operatorname{div} q) + r) \bmod q^{\nu+1}} = Y_m, \qquad (2.9)$$

for $1 \le m \le q^\nu$, $0 \le r \le q-1$. For TWC, the branch values $Y_m$ represent real numbers, of course. Examples of the symmetry are shown in Figure 2.3(b) and Figure 2.4, for q = 2 and q = 4, respectively.

Since the underlying convolutional code does not need to be explicitly specified (contrary to [Marc90a], where Ungerboeck's codes [Unge82] were assumed), no actual convolutional code is required for the construction. In fact, there are many convolutional codes that fit (2.9). For q = 2, for example, the convolutional codes used in [Zeha90] for Quasi-Orthogonal and Super-Orthogonal codes of degree 1 fit (2.9); their generator polynomials are $g_1(x) = 1 + x^\nu$ and $g_j(x) = x^{j-1}$, for $2 \le j \le \nu$.

Figure 2.4: Example of the symmetry of the underlying convolutional code, for q = 4.


2.2.3 New Constructions

The correspondence between the representation symbols and the symbols of the underlying convolutional code is not uniquely specified. Therefore, three constructions are considered: construction A, a "trivial" construction based on (2.7) (because of the structure of the underlying convolutional code, as given by (2.9), (2.7) and (2.8) are equivalent), and constructions B and C, two "non-trivial" constructions. They all result in a white spectrum, for equal branch probabilities.

For construction A, in addition to the symmetry specified by (2.9), the representation symbols have the following relation:

$$Y_{2k} = -Y_{2k-1}, \qquad (2.10)$$

for $1 \le k \le q^\nu/2$. By combining (2.10) and (2.9), it follows immediately from (2.8) or (2.7) that the construction results in a white spectrum. An example of the construction is shown in Figure 2.5(a). In the example, $Y_1 = A$, $Y_2 = -A$, $Y_3 = B$, etc. Interestingly, the construction is similar to that of the Super-Orthogonal codes of degree 1, as defined in [Zeha90], which are designed for trellis-coded modulation (TCM is described in Part II of this thesis).
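As a quick numerical check (our own sketch, not from the thesis), one can drive a 2-state trellis obeying the first solution of (2.5), i.e. B = -A and D = -C, with uncorrelated input bits and estimate the autocorrelation of the generated fake process; the estimates for tau > 0 come out near zero, as the constructions guarantee.

    #include <stdio.h>
    #include <stdlib.h>

    #define LEN 1000000

    /* Random walk through the 2-state trellis of Figure 2.2 with
     * B = -A and D = -C (opposite values on the branches leaving
     * each state); estimate R(tau) of the generated sequence.        */
    int main(void)
    {
        const double A = 0.7, C = 1.3;   /* arbitrary branch values */
        static double x[LEN];
        int s = 0;
        for (int k = 0; k < LEN; k++) {
            int b = rand() & 1;          /* uncorrelated input bit  */
            double v = (s == 0) ? A : C;
            x[k] = b ? -v : v;
            s = b;                       /* 1-bit shift register    */
        }
        for (int tau = 0; tau <= 4; tau++) {
            double r = 0.0;
            for (int k = 0; k + tau < LEN; k++) r += x[k] * x[k + tau];
            printf("R(%d) ~ %+.4f\n", tau, r / (LEN - tau));
        }
        return 0;
    }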

For construction B, in addition to the symmetry specified by (2.9), the representation symbols have the following relation, for $\nu > 1$:

$$Y_{k-1+q^\nu/2+q-2((k-1) \bmod q)} = -Y_k, \qquad (2.11)$$

for $1 \le k \le q^\nu/2$. The proof that the construction results in a white spectrum is given in Appendix B. An example of the construction is shown in Figure 2.5(b). Now, $Y_1 = A$, $Y_2 = B$, $Y_3 = C$, etc. Construction B is the construction that was initially developed in [Vleu91].

Finally, for construction C, in addition to the symmetry specified by (2.9), the representation symbols have the following relation, for $\nu > 1$:

(2.12)

for $1 \le k \le q^\nu/2$. The proof that the construction results in a white spectrum is given in Appendix C. An example of the construction is shown in Figure 2.5(c). In this case, $Y_1 = A$, $Y_2 = B$, $Y_3 = -A$, etc.

Figure 2.5: Examples of the proposed constructions for q = 2, $\nu = 3$: (a) construction A, (b) construction B, and (c) construction C.

2.3 Trellis-Coded Quantization

Inspired by Ungerboeck's trellis-coded modulation (TCM) technique known in communication theory [Unge82, Unge87a, Unge87b] (see also Part II of this thesis), Marcellin and Fischer [Marc90a] recognized that TWC can be improved by a technique which they call trellis-coded quantization (TCQ). It is similar to TWC, but, instead of a single codebook element, the finite state machine in this case specifies a set of codebook elements. The encoder now investigates all allowed sequences of sets, selecting from each set the element closest to the source sample.

Figure 2.6: Trellis diagram of a 2-state 2-bit-per-sample trellis-coded quantizer for q = 2: (a) the generic trellis, and (b) the trellis for the source sequence shown in Table 2.1.

An example of TCQ is given in Figure 2.6. In Figure 2.6(a), the trellis codebook consists of four sets: $D_0 = \{-1.75, 0.25\}$, $D_1 = \{-1.25, 0.75\}$, $D_2 = \{-0.75, 1.25\}$, and $D_3 = \{-0.25, 1.75\}$. For each branch the set member closest to the source value is selected; Figure 2.6(b) shows the trellis for the source symbols listed in Table 2.1. The trellis is then searched for the best path, which has been indicated in bold type in Figure 2.6(b); the total distortion equals 0.87. The rate for this example is two bits per sample since one bit is used to specify which set member has been selected and one bit is used to specify which branch has been chosen.
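A sketch of the per-branch set selection in C, using the sets of Figure 2.6; picking the nearest member of a set yields both the branch metric and the second codeword bit (the function and array names are ours).

    /* TCQ branch metric for the 2-bit example of Figure 2.6: pick the
     * member of the two-element branch set nearest to the source
     * sample x; *bit receives the member index (the second bit of
     * the two-bit codeword).  Returns the squared-error distortion.  */
    double tcq_branch(double x, const double set[2], int *bit)
    {
        double e0 = x - set[0], e1 = x - set[1];
        if (e0 * e0 <= e1 * e1) { *bit = 0; return e0 * e0; }
        *bit = 1;
        return e1 * e1;
    }

    /* The sets D0..D3 of Figure 2.6(a): */
    const double D[4][2] = { { -1.75, 0.25 }, { -1.25, 0.75 },
                             { -0.75, 1.25 }, { -0.25, 1.75 } };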

The TWC constructions of Section 2.2.3 are easily extended to TCQ. Consider again the trellis having $q^\nu$ states $S_l$, $1 \le l \le q^\nu$, with $q = 2^n$ branches entering and leaving each state. Now, the branch from state $S_{\lceil l/q \rceil + rq^{\nu-1}}$, $0 \le r \le q-1$, to state $S_l$ is assigned the set $W_{l+rq^\nu}$. For quantizing at R b/sample ($R = n, n+1, \ldots$), each set contains $2^{R-n}$ representation symbols. $Y_m$ now denotes the set $\{y_{m,1}, y_{m,2}, \ldots, y_{m,2^{R-n}}\}$ and $-Y_m$ is used to denote the set $\{-y_{m,1}, -y_{m,2}, \ldots, -y_{m,2^{R-n}}\}$. As an example, in Figure 2.5 replace A by $\{a_1, a_2, \ldots, a_{2^{R-n}}\}$, $-A$ by $\{-a_1, -a_2, \ldots, -a_{2^{R-n}}\}$, etc.

Constructions A, B, and C again give a white spectrum, assuming that all set members are used with the same probability; the proofs of Appendix B and Appendix C are easily extended to TCQ.

2.4 Trellis-Coded Vector Quantization

In [Fisc91], trellis-coded vector quantization (TCVQ) was investigated. While for TCQ the branch sets contain scalars, for TCVQ they contain vectors. Thus, TCQ can be seen as 1-dimensional TCVQ.

The TCQ constructions of Section 2.3 can be extended to TCVQ as follows. Consider again the trellis having $q^\nu$ states $S_l$, $1 \le l \le q^\nu$, with $q = 2^n$ branches entering and leaving each state. Again, the branch from state $S_{\lceil l/q \rceil + rq^{\nu-1}}$, $0 \le r \le q-1$, to state $S_l$ is assigned the set $W_{l+rq^\nu}$. Now, for quantizing at R b/sample using N-dimensional representation vectors, each set contains $2^{NR-n}$ vectors. $Y_m$ denotes the set of N-dimensional vectors $\{y_{m,1}, y_{m,2}, \ldots, y_{m,2^{NR-n}}\}$ and $-Y_m$ is used to denote the set $\{-y_{m,1}, -y_{m,2}, \ldots, -y_{m,2^{NR-n}}\}$. As an example, in Figure 2.5 replace A by

$$\{(a_{1,1}, a_{1,2}, \ldots, a_{1,N}),\ (a_{2,1}, a_{2,2}, \ldots, a_{2,N}),\ \ldots,\ (a_{2^{NR-n},1}, a_{2^{NR-n},2}, \ldots, a_{2^{NR-n},N})\},$$

$-A$ by

$$\{(-a_{1,1}, -a_{1,2}, \ldots, -a_{1,N}),\ (-a_{2,1}, -a_{2,2}, \ldots, -a_{2,N}),\ \ldots,\ (-a_{2^{NR-n},1}, -a_{2^{NR-n},2}, \ldots, -a_{2^{NR-n},N})\},$$

etc.

It should be noted that, in general, constructions A, B, and C no longer guarantee a white spectrum for TCVQ. A white spectrum can be guaranteed, however, by forcing the representation vectors to have a certain structure. This was done for the case of q = 2, N = 2, and R = 1/2, for the Laplacian source, in [Vleu92], but the performances obtained for this case are lower than the performances obtained for the constructions proposed in this thesis, which use unconstrained representation vectors. As argued in [Eyub93], this observation is true in general: although structured quantizers can be asymptotically optimal for large dimensions, for small dimensions they are inferior to unconstrained quantizers. Experiments performed with the optimized TCVQs show that they do generate a white spectrum (as they should, since generating a white spectrum is a necessary condition for the fake process, as was shown in [Lind78]).

2.5 Conclusions

Three different constructions of TWCs, TCQs, and TCVQs have been proposed. They are based on a fake process approach. By enforcing certain symmetry properties, it has been guaranteed for the TWC and TCQ constructions that a random walk through the trellis results in an uncorrelated signal, irrespective of the actual trellis codebook. This cannot be guaranteed for the TCVQ constructions.


3 Performance Evaluation

3.1 Introduction

In Chapter 2, three new constructions of TWCs, TCQs, and TCVQs have been proposed. In this chapter, the quantizers will be optimized for specific sources and their performances will be determined. So as to be able to compare the performances with results from the quantization literature, memoryless Laplacian, Gaussian, and uniform sources, as well as an AR(1) Gauss-Markov source, are used.

Section 3.2 discusses some issues that have to be resolved before actual experiments can be performed. Section 3.3 then evaluates the TWCs and TCQs and the same is done in Section 3.4 for the TCVQs. Section 3.5 discusses the application of TWCs and TCQs to Gauss-Markov sources. The performance of the M-algorithm, a well-known reduced-state search algorithm, is determined in Section 3.6 and Section 3.7 discusses why the newly constructed TCQs outperform the previous TCQ construction of [Marc90a]. Section 3.8, finally, concludes the chapter.


3.2 Preliminaries

3.2.1 Implementation Complexity

In order to make a fair comparison of the various quantizers, they should be compared at the same rate and complexity. The complexity definition critically depends on the kind of implementation one has in mind. In particular, two extreme cases can be distinguished, viz., a low-speed serial (or software) implementation and a high-speed parallel implementation. For the serial implementation, the complexity definition of [Marc90a], i.e. the number of multiplications, additions, and comparisons, is suitable, but for a parallel implementation the complexity definition of [Forn73, Vite79, Unge87b], i.e. the number of state transitions in the trellis, is more appropriate. The latter definition is used here. Thus, the complexity equals the product of the number of states, the number of branches (sets) per state, and the number of (1- or multi-dimensional) vectors per set: $q^\nu \cdot q \cdot 2^{NR-n} = 2^{\nu n + NR}$. For example, a 64-state ($\nu = 6$, q = 2) TCQ at R = 2 (N = 1) has complexity $2^{6+2} = 256$.

3.2.2 Training Sequences

To determine the performances of the proposed quantizers, experiments have been performed for samples from memoryless uniform, Gaussian, and Laplacian sources, the probability-density functions (PDFs) of which respectively are:

$$f(x) = \begin{cases} \dfrac{1}{2\sqrt{3}\,\sigma} & \text{if } |x| < \sqrt{3}\,\sigma \\ 0 & \text{otherwise,} \end{cases} \qquad (3.1)$$

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/2\sigma^2}, \quad \text{and} \qquad (3.2)$$

$$f(x) = \frac{1}{\sigma\sqrt{2}}\, e^{-|x|\sqrt{2}/\sigma}, \qquad (3.3)$$

where $\sigma^2$ is the variance. In the experiments, the Gaussian and Laplacian sources have $\sigma^2 = 1$, while the uniform source has $\sigma^2 = 4/3$. The figure of merit is the signal-to-noise ratio (SNR), defined as $10 \log_{10}(S/D)$ dB, where S is the source variance and D is the quantization error variance (the distortion). The C-language routines used for generating the random samples use the ran3 function from [Pres88], which implements a routine suggested by Knuth [Knut81]; ran3 returns a uniform random deviate between 0 and 1. The routine for generating samples from the Gaussian distribution implements the direct method suggested in [Abra65] and the routine for generating samples from the Laplacian distribution is the inverse of the integral of (3.3).
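A sketch of such an inverse-CDF generator for the unit-variance Laplacian (3.3), with the standard rand() standing in for ran3:

    #include <math.h>
    #include <stdlib.h>

    /* Sample from the unit-variance Laplacian PDF (3.3) by inverting
     * its CDF: for u < 1/2, x = ln(2u)/sqrt(2); for u >= 1/2,
     * x = -ln(2(1-u))/sqrt(2).  rand() replaces the ran3 routine.    */
    double laplacian_sample(void)
    {
        double u = (rand() + 0.5) / ((double)RAND_MAX + 1.0); /* (0,1) */
        double t = (u < 0.5) ? u : 1.0 - u;
        double x = -log(2.0 * t) / sqrt(2.0);
        return (u < 0.5) ? -x : x;
    }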

For the experiments, a training set of N·100 000 independent random vectors (N²·100 000 i.i.d. samples) is used. The reason for this is that 100 000 samples, as used in [Marc90a], turned out not to be enough for TCVQ, in several experiments. Therefore, as a rule of thumb, N²·100 000 samples are used and the final performance is measured on an i.i.d. sequence not in the training set. It should be remarked that for 100 000 i.i.d. samples (as were also used in [Marc90a]), for the TWC and TCQ experiments, the performances obtained for sequences not in the training set are the same as for sequences inside the training set.

3.2.3 Confidence Intervals

To enable the computation of the significance, or reliability, of the computed SNR values, the samples are divided into 100 sequences (each consisting of N·1000 random vectors). To compute the confidence intervals, for each of the T = 100 experiments both the source variance $S_i$ and the noise variance $D_i$ are considered to be random variables. The confidence interval is overestimated in this way, since $S_i$ in reality is known exactly. The total source and noise variances are computed as $S = \frac{1}{T}\sum_{i=1}^{T} S_i$ and $D = \frac{1}{T}\sum_{i=1}^{T} D_i$. Since each experiment involves N·1000 vectors, it is valid to assume that $S_i$ and $D_i$ are normally distributed. Thus for S and D the $\gamma \cdot 100\%$ confidence intervals are $(S - z\sigma_S/\sqrt{T},\ S + z\sigma_S/\sqrt{T})$ and $(D - z\sigma_D/\sqrt{T},\ D + z\sigma_D/\sqrt{T})$, where $\sigma_S^2 = \frac{1}{T-1}\sum_{i=1}^{T}(S_i - S)^2$, $\sigma_D^2 = \frac{1}{T-1}\sum_{i=1}^{T}(D_i - D)^2$, and z is chosen such that

$$\int_{-z}^{z} f_{T-1}(y)\,dy = \gamma, \qquad (3.4)$$

where $f_{T-1}(y)$ is the PDF of Student's t-distribution with T-1 degrees of freedom [LG89]. The probability of both S and D being inside their respective confidence intervals is $\gamma^2$ and the resulting $\gamma^2 \cdot 100\%$ confidence interval for S/D is:

$$S/D \in \left( \frac{S - z\sigma_S/\sqrt{T}}{D + z\sigma_D/\sqrt{T}},\ \frac{S + z\sigma_S/\sqrt{T}}{D - z\sigma_D/\sqrt{T}} \right). \qquad (3.5)$$

For $\gamma^2 = 0.95$, z = 2.27, as can be obtained by solving (3.4) either numerically (used here) or by table lookup ($\gamma \approx 0.975$).
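A sketch in C of the interval computation (3.5), with z = 2.27 hard-coded from the text instead of being obtained by solving (3.4):

    #include <math.h>
    #include <stdio.h>

    /* 0.95 confidence interval for the SNR per (3.5), given the
     * per-experiment source and noise variances S[i] and D[i],
     * i = 0..T-1; z = 2.27 is the Student-t quantile quoted in the
     * text for gamma^2 = 0.95 and T = 100.                           */
    void snr_interval(const double *S, const double *D, int T)
    {
        const double z = 2.27;
        double Sm = 0.0, Dm = 0.0, vS = 0.0, vD = 0.0;
        for (int i = 0; i < T; i++) { Sm += S[i]; Dm += D[i]; }
        Sm /= T; Dm /= T;
        for (int i = 0; i < T; i++) {
            vS += (S[i] - Sm) * (S[i] - Sm);
            vD += (D[i] - Dm) * (D[i] - Dm);
        }
        double sS = z * sqrt(vS / (T - 1)) / sqrt((double)T);
        double sD = z * sqrt(vD / (T - 1)) / sqrt((double)T);
        printf("SNR in [%.3f, %.3f] dB\n",
               10.0 * log10((Sm - sS) / (Dm + sD)),
               10.0 * log10((Sm + sS) / (Dm - sD)));
    }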

3.2.4 Codebook Optimization

To optimize the codebook, 100 iterations were performed using an algorithm based on that described in [Stew82], but adapted to maintain the structures prescribed by the respective constructions and extended to TCQ and TCVQ. Although convergence is reached in less than 100 optimization steps for small trellises at low rates, large trellises at higher rates require about 100 steps, in our experience. The optimization algorithm is listed in Figure 3.1.

Step 0. Initialization. Given are a training sequence and the initial codebook, $C^{(0)}$. Set k = 0.
Step 1. Using $C^{(k)}$, the codebook for generation k, encode the training sequence.
Step 2. Find the optimal codebook, $C^{(k+1)}$, for generation k+1.
Step 3. If k < 99, then replace k by k+1 and go to Step 1.
Step 4. Halt with $C^{(100)}$ as the final codebook.

Figure 3.1: Codebook optimization algorithm.

In [Stew82], in Step 2, each representation symbol of generation k+1 is the centroid of those elements of the training sequence that were encoded by the corresponding representation symbol of generation k. For the constructions presented in this thesis, the same sets of representation symbols $Y_m^{(k)}$ and $-Y_m^{(k)}$ each occur at q branches of the trellis. Therefore, in Step 2, now each representation symbol of $Y_m^{(k+1)}$ is computed as the centroid of the elements of the training sequence that were encoded by any of the q occurrences of the corresponding representation symbol of $Y_m^{(k)}$ and the negatives of those elements of the training sequence that were encoded by any of the q occurrences of the corresponding representation symbol of $-Y_m^{(k)}$. Representation symbols onto which no source symbols are mapped are updated to zero (the average source value). Although the convergence of the algorithm is slow, the codebooks that are finally obtained in the experiments give good results.
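A sketch of the modified centroid of Step 2 for one representation symbol; hits_pos holds the training elements encoded by any of the q occurrences of the symbol in $Y_m^{(k)}$, hits_neg those encoded by the corresponding symbol in $-Y_m^{(k)}$ (the arrays and names are ours).

    /* Modified centroid update for one representation symbol: average
     * the elements encoded by the symbol itself together with the
     * NEGATIVES of those encoded by its mirrored counterpart.  With
     * no hits, reset to 0 (the average source value), as in the text. */
    double update_symbol(const double *hits_pos, int n_pos,
                         const double *hits_neg, int n_neg)
    {
        if (n_pos + n_neg == 0) return 0.0;
        double s = 0.0;
        for (int i = 0; i < n_pos; i++) s += hits_pos[i];
        for (int i = 0; i < n_neg; i++) s -= hits_neg[i];
        return s / (n_pos + n_neg);
    }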

For TWC and TCQ, the initial trellis codebooks are chosen deterministically, using uniformly spaced levels from the interval $(-2, 2)$. Contrary to a random initialization, this choice of initial codebooks guarantees a certain minimal distance both inside each set and between the sets of the branches entering and leaving each state. The same initial codebooks are used for all sources. The specific initializations for constructions A, B, and C can be found in Appendix D.

3.3 TWC and TCQ Experiments

For TWC and TCQ at R = 1, R = 2, R = 3, and R = 4, the SNR results are listed in Table 3.1, Table 3.2, and Table 3.3, for quantization of the Laplacian, Gaussian, and uniform sources, respectively. For all SNR values listed, the 95% confidence interval corresponds to a tolerance of no more than 0.003 dB (this result differs from the tolerances given in [Marc90a], which range from 0.02 to 0.15 dB; a possible explanation is that in [Marc90a] it is incorrectly assumed that the source variance is the same for each of the 100 parts of the training sequence). For R = 1, n equals 1; for R = 2, n equals 1 or 2; and for R = 3 and R = 4, n equals 1, 2, or 3 (for "pure" TWC, R = n). Note that the numbers of states in the experiments have been restricted to be powers of q, so as to have an underlying q-ary convolutional code. The constructions are easily extended to different numbers of states, however.

TWCs and TCQs at the same rate, having the same number of states, have the same complexity, according to the definition of Section 3.2.1. When comparing the SNR results listed in Table 3.1, Table 3.2, and Table 3.3 at the same complexities, it can be observed that, generally, construction C gives the best performance (except for the Laplacian source at R = 1), although the differences with the other constructions are small. It can also be observed that, generally, the performances decrease as the number of (different) representation symbols is decreased (i.e. as q is increased). The TCQs thus outperform the TWCs, but the differences decrease as the complexity (or the number of representation symbols) increases.

In [Vleu93a], it was shown that at the same number of states, i.e. at the same complexity, the proposed construction-B TCQs outperform the TCQs of [Marc90a], for the Laplacian and Gaussian sources. For the uniform source the performances of the proposed TCQs are comparable to those of [Marc90a].

R    q   States  Compl.  Symb.    A      B      C
1    2      4       8      4     3.98   4.35   4.33
1    2      8      16      8     4.31   4.82   4.83
1    2     16      32     16     4.76   5.16   5.10
1    2     32      64     32     5.13   5.39   5.35
1    2     64     128     64     5.51   5.54   5.54
1    2    128     256    128     5.65   5.69   5.68
1    2    256     512    256     5.81   5.85   5.79
2    2     16      64     32    10.62  10.63  10.68
2    2     64     256    128    11.20  11.24  11.27
2    2    256    1024    512    11.58  11.59  11.65
2    4     16      64     16    10.29  10.28  10.38
2    4     64     256     64    11.15  11.15  11.20
2    4    256    1024    256    11.55  11.56  11.67
3    2     64     512    256    17.11  17.18  17.16
3    4     64     512    128    17.11  17.11  17.12
3    8     64     512     64    16.75  16.84  16.86
4    2     64    1024    512    23.00  22.92  22.97
4    4     64    1024    256    22.97  22.98  22.97
4    8     64    1024    128    22.69  22.73  22.78

Table 3.1: Experimental SNRs (in dB), complexities, and numbers of (different) representation symbols for constructions A, B, and C, for TWC/TCQ of the Laplacian source at R = 1, R = 2, R = 3, and R = 4.

R    q   States  Compl.  Symb.    A      B      C
1    2      4       8      4     4.78   5.02   5.05
1    2      8      16      8     4.97   5.16   5.19
1    2     16      32     16     5.20   5.30   5.30
1    2     32      64     32     5.31   5.39   5.39
1    2     64     128     64     5.43   5.49   5.49
1    2    128     256    128     5.49   5.56   5.56
1    2    256     512    256     5.56   5.63   5.61
2    2     16      64     32    10.88  11.00  11.05
2    2     64     256    128    11.22  11.29  11.28
2    2    256    1024    512    11.44  11.45  11.48
2    4     16      64     16    10.81  10.89  10.97
2    4     64     256     64    11.21  11.30  11.18
2    4    256    1024    256    11.41  11.38  11.48
3    2     64     512    256    17.21  17.24  17.24
3    4     64     512    128    17.18  17.19  17.23
3    8     64     512     64    17.02  17.08  17.10
4    2     64    1024    512    23.14  23.16  23.16
4    4     64    1024    256    23.12  23.14  23.15
4    8     64    1024    128    23.01  23.04  23.03

Table 3.2: Experimental SNRs (in dB), complexities, and numbers of (different) representation symbols for constructions A, B, and C, for TWC/TCQ of the Gaussian source at R = 1, R = 2, R = 3, and R = 4.

R    q   States  Compl.  Symb.    A      B      C
1    2      4       8      4     6.14   6.22   6.25
1    2      8      16      8     6.20   6.30   6.32
1    2     16      32     16     6.27   6.37   6.37
1    2     32      64     32     6.33   6.42   6.43
1    2     64     128     64     6.39   6.48   6.47
1    2    128     256    128     6.46   6.51   6.52
1    2    256     512    256     6.50   6.55   6.56
2    2     16      64     32    12.64  12.76  12.78
2    2     64     256    128    12.79  12.86  12.90
2    2    256    1024    512    12.88  12.96  12.98
2    4     16      64     16    12.61  12.71  12.77
2    4     64     256     64    12.77  12.84  12.90
2    4    256    1024    256    12.87  12.91  12.98
3    2     64     512    256    19.01  19.12  19.13
3    4     64     512    128    19.02  19.09  19.13
3    8     64     512     64    19.00  19.04  19.08
4    2     64    1024    512    25.14  25.27  25.27
4    4     64    1024    256    25.16  25.23  25.25
4    8     64    1024    128    25.19  25.17  25.21

Table 3.3: Experimental SNRs (in dB), complexities, and numbers of (different) representation symbols for constructions A, B, and C, for TWC/TCQ of the uniform source at R = 1, R = 2, R = 3, and R = 4.

             R = 1           R = 2           R = 3
States     VW     MF      VW      MF      VW      MF
   8      4.83   4.47   10.18    9.56   15.87   15.00
  16      5.16   4.92   10.68   10.47   16.48   16.20
  32      5.39   5.13   10.98   10.73   16.90   16.43
  64      5.54   5.35   11.27   10.98   17.18   16.79
 128      5.69   5.49   11.44   11.16   17.43   16.84
 256      5.85   5.54   11.67   11.22   17.57   16.96
 512      5.95    --    11.81     --      --      --
  LM      3.01   3.01    7.54    7.54   12.64   12.64
  RD      6.62   6.62   12.66   12.66   18.68   18.68

Table 3.4: SNRs (in dB) of the proposed TCQs (VW) compared with the TCQs of [Marc90a] (MF), the Lloyd-Max quantizer performance (LM), and the rate distortion bound (RD), for the Laplacian source at R = 1, R = 2, and R = 3.

Table 3.4 and Table 3.5 compare the performances of the proposed TCQs (the best results listed in Table 3.1 and Table 3.2) with the best results obtained in [Marc90a], the Lloyd-Max quantizer ([Lloy82, Max60]) performance, and the rate distortion bound [Berg71]. For the cases not listed in Table 3.1 and Table 3.2, the results have been obtained for construction B.

For the Laplacian and Gaussian sources, the proposed TCQs in fact improve upon all previous results found in the literature (as listed in [Marc90a]), as shown in Table 3.6.

3.4 TCVQ Experiments

For TCVQ, the initial trellis codebooks are chosen randomly, using i.i.d. samples from the distribution to be quantized, both because good deterministic initializations are not available and because the optimized TCVQs were found to generate an approximately white spectrum (although, for TCVQ, a white spectrum is not guaranteed by the constructions; see Section 2.4).

             R = 1           R = 2           R = 3
States     VW     MF      VW      MF      VW      MF
   8      5.19   5.19   10.83   10.70   16.64   16.33
  16      5.30   5.27   11.05   10.78   16.90   16.40
  32      5.39   5.34   11.14   10.85   17.11   16.47
  64      5.49   5.43   11.30   10.94   17.24   16.56
 128      5.56   5.52   11.41   10.99   17.39   16.61
 256      5.63   5.56   11.48   11.04   17.43   16.64
 512      5.68    --    11.57     --      --      --
  LM      4.40   4.40    9.30    9.30   14.62   14.62
  RD      6.02   6.02   12.04   12.04   18.06   18.06

Table 3.5: SNRs (in dB) of the proposed TCQs (VW) compared with the TCQs of [Marc90a] (MF), the Lloyd-Max quantizer performance (LM), and the rate distortion bound (RD), for the Gaussian source at R = 1, R = 2, and R = 3.

Source   R   States    VW      LIT
Lapl.    1     512    5.95    5.76
Lapl.    2     512   11.81   11.45
Lapl.    3     256   17.57   17.20
Gauss.   1     512    5.68    5.56
Gauss.   2     512   11.57   11.04
Gauss.   3     256   17.43   16.78

Table 3.6: SNRs (in dB) of the proposed construction-B TCQs (VW) compared with the performances found in the literature (LIT, as listed in [Marc90a]), for the Laplacian and Gaussian sources at R = 1, R = 2, and R = 3.

N   q   Compl.  Symb.   Laplacian  Gaussian  Uniform
2   2     256    128      5.65       5.46      6.47
2   4     256     64      5.69       5.53      6.53
3   2    1024    256      5.65       5.46      6.46
3   4    1024    128      5.79       5.55      6.52
3   8    1024     64      5.78       5.56      6.54
4   2    2048    512      5.69       5.46      6.46
4   4    2048    256      5.85       5.55      6.52
4   8    2048    128      5.84       5.56      6.54

Table 3.7: Experimental SNRs (in dB), complexities, and numbers of (different) representation symbols for 64-state construction-C TCVQ of the Laplacian, Gaussian, and uniform sources at R = 1, for several values of N and q.

Table 3.7 lists the performances of several 64-state construction-C TCVQs at R = 1; the 95% confidence intervals correspond to a tolerance of no more than 0.003 dB. It can be observed that, contrary to the results given in Section 3.3 for N = 1, for the Gaussian and uniform sources the performances increase as q is increased, even though the number of representation symbols decreases with q. For the Laplacian source, q = 8 achieves virtually the same performance as q = 4, using half as many representation symbols. Further, for the Gaussian and uniform sources, it can be observed from Table 3.7 that increasing the number of representation symbols, or their dimension, beyond a certain value does not result in a higher performance; the same performance can be obtained at a lower complexity, by using lower-dimensional representation symbols.

To further investigate the influence of the representation symbol dimension on the TCVQ performance, experiments have been performed for construction C, for several rates and dimensions, at a constant complexity. Table 3.8 lists the SNRs obtained for the experiments with a complexity of 256 at R = 1/2, R = 1, R = 2, and R = 3.

 R    N   q   States  Symb.   Laplacian  Gaussian  Uniform
1/2   2   2     128    128      2.96       2.72      3.09
1/2   4   4      64     64      3.00       2.74      3.11
1/2   8   4      16     64      2.97       2.64      2.99
 1    1   2     128    128      5.68       5.56      6.52
 1    2   4      64     64      5.70       5.53      6.50
 1    4   4      16     64      5.58       5.41      6.43
 2    1   2      64    128     11.27      11.28     12.90
 2    2   4      16     64     10.78      11.03     12.78
 3    1   2      32    128     16.85      17.06     19.04
 3    2   2       4    128     15.68      16.26     18.65

Table 3.8: Experimental SNRs (in dB) and numbers of (different) representation symbols for construction-C TCVQ of the Laplacian, Gaussian, and uniform sources at R = 1/2, R = 1, R = 2, and R = 3, at a complexity of 256.

It can be observed that, at a constant rate and complexity, increasing N while not simultaneously increasing q decreases the performance, whereas simultaneously increasing N and q can increase the performance. In Table 3.8, those performance increases occur in particular in those cases where no parallel branches are used in the trellis. In Table 3.7 as well, increasing q in general increases the performance. The explanation for the observation that increasing q does not always increase the performance (as was also observed in Section 3.3) could be the associated reduction of the number of representation symbols.

The observation that simultaneously increasing N and q, at a constant rate and complexity (while not using parallel branches), increases the performance agrees with the theoretical bound on the distortion given for this case in equation (7.4.37) of [Vite79], which, in the notation of this thesis, is:

$$D \le D(R_n) + \frac{d_0\, q^{-(R_n - R_n(D))/C_0}}{\left(1 - q^{-(R_n - R_n(D))^2/2R_nC_0}\right)^{2}}\, e^{-N(\nu+1)R_n(R_n - R_n(D))/C_0},$$


Complex.   TCQ:MF   TCVQ:FMW   TCVQ:VW   TCQ:VW
   32       4.92      5.05       5.15      5.16
   64       5.13      5.22       5.34      5.39

Table 3.9: SNRs (in dB), at the same complexity, for the TCQs of [Marc90a] (TCQ:MF), the TCVQs of [Fisc91] (TCVQ:FMW), the proposed q = 2 construction-B TCVQs (TCVQ:VW), and the proposed q = 2 construction-B TCQs (TCQ:VW), for the Laplacian source, at R = 1. For the TCVQs, N = 2.

where $R_n$ is the rate, expressed in nats per symbol, $D(R_n)$ and $R_n(D)$ are the performance bounds given by rate distortion theory (see also Chapter 4), and $d_0$ and $C_0$ are constants. Since, at a constant rate and complexity, $N(\nu+1)$ is constant, the exponent of the bound does not depend on q. The fraction, however, does decrease with q, explaining the performance increase as q increases.

In [Fisc91], two experiments were presented for a memoryless Laplacian source, at R = 1. Table 3.9 shows a comparison, at the same complexities, of the performances of the TCQs of [Marc90a], the TCVQs of [Fisc91], the proposed TCVQs (q = 2, construction B), and the proposed TCQs (q = 2, construction B). The proposed TCVQs outperform those of [Fisc91], but the proposed TCQs are still superior.

In [Wang92], different TCVQs and more results were presented. The SNRs presented in [Wang92] were computed inside the training sequence of 1 000 000 samples of a memoryless Gaussian source. To compare the performances of the proposed TCVQs with those of [Wang92], experiments were performed with the proposed TCVQs, also using 1 000 000 samples, for several cases selected from the tables in [Wang92]. The performances were measured both inside and outside the training set. Table 3.10, in which the proposed TCVQs are compared with those of [Wang92], clearly shows that, in the case of

                     VW               WM
 R    N   q    Inside   Outside   Inside
0.5   2   2     2.62      2.62     2.63
 1    4   4     5.42      5.41     5.33
 2    4   4    11.20     11.09    11.22
 3    2   4    16.90     16.89    16.62

Table 3.10: SNRs (in dB), inside and outside the training set, for the proposed 16-state construction-C TCVQs (VW) and the 16-state TCVQs of [Wang92] (WM), for the Gaussian source, for several rates, R, and dimensions, N.

3.5 Gauss-Markov Sources

Although the experiments discussed in Section 3.3 have shown that for memoryless sources the TCQs have performances equal or superior to the TWCs, this could change for Gauss-Markov sources, since TCQs are not optimal in this case [Marc90a]. Therefore, additional experiments have been performed for Gauss-Markov sources, which are defined by

$$x_k = \sum_{j=1}^{L} a_j x_{k-j} + w_k, \qquad (3.7)$$

where $x_k$ is the source output, L is the order (memory length) of the source, $a_j$ is a real coefficient, and $w_k$ is a sample of a memoryless Gaussian source as defined by (3.2). An algorithm for predictive TWC (the extension of which to TCQ is trivial) of Gauss-Markov (or general autoregressive) sources is given in [Ayan86] and [Gers92]. Basically, for each trellis state, a prediction of the current input sample is made, based on the previous representation symbols of the best path entering the state:

$$\hat{x}_{k,l} = \sum_{j=1}^{L} \hat{a}_j\, \hat{x}_{k-j,l}, \qquad (3.8)$$

where $l$, $1 \le l \le q^\nu$, indicates the dependency on the state.

Step 0. Initialization. Given are a training sequence, the initial codebook, $C^{(0)}$, and the initial prediction coefficients $\hat{a}_j^{(0)}$, $1 \le j \le L$. Set k = 0 and t = 0.
Step 1. Using $C^{(k+10t)}$, the codebook for generation k+10t, and $\hat{a}_j^{(t)}$, encode the training sequence.
Step 2. Find the updated codebook, $C^{(k+1+10t)}$, for generation k+1+10t.
Step 3. If k < 9, then replace k by k+1 and go to Step 1.
Step 4. Find the updated prediction coefficients, $\hat{a}_j^{(t+1)}$.
Step 5. If t < 9, then replace t by t+1, set k = 0, and go to Step 1.
Step 6. Halt with $C^{(100)}$ as the final codebook and $\hat{a}_j^{(10)}$ as the final prediction coefficients.

Figure 3.2: Predictive codebook optimization algorithm.

Both the codebook and the prediction coefficients have to be optimized (especially for low rates, the correlation of the representation symbols will be different from that of the input samples).

The optimization algorithm used here is listed in Figure 3.2. It is based on the algorithm given in [Ayan86], but is different in two aspects. The first is Step 4. The predictor update equations used in [Ayan86] are:

$$\sum_{j=1}^{L} \hat{a}_j \sum_{k=1}^{K} \hat{x}_{k-j}\, \hat{x}_{k-i} = \sum_{k=1}^{K} (x_k - y_k)\, \hat{x}_{k-i}, \qquad (3.9)$$

for $1 \le i \le L$, where K is the length of the training sequence (in our case, K = 100 000) and $y_k$ is the codebook element that is used to represent the residual $x_k - \hat{x}_k$. For low rates, (3.9) overestimates the correlation compared with that of the representation sequence; the estimate $\hat{a}_j$ diverges from the real value $a_j$. A good estimate for $a_j$ is obtained by replacing $(x_k - y_k)$ in (3.9) by $\hat{x}_k$, resulting in:

$$\sum_{j=1}^{L} \hat{a}_j \sum_{k=1}^{K} \hat{x}_{k-j}\, \hat{x}_{k-i} = \sum_{k=1}^{K} \hat{x}_k\, \hat{x}_{k-i}, \qquad (3.10)$$

for $1 \le i \le L$.
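For the AR(1) experiments below (L = 1), (3.10) reduces to a single ratio; a sketch in C, with our own names:

    /* Predictor update (3.10) for L = 1: the estimate of a_1 from the
     * reconstructed sequence xhat[0..K-1] is
     *   a1 = sum_k xhat[k]*xhat[k-1] / sum_k xhat[k-1]^2.            */
    double update_a1(const double *xhat, int K)
    {
        double num = 0.0, den = 0.0;
        for (int k = 1; k < K; k++) {
            num += xhat[k] * xhat[k - 1];
            den += xhat[k - 1] * xhat[k - 1];
        }
        return num / den;
    }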

The second difference is that in our algorithm the prediction coefficients are updated once every 10 codebook updates, giving a better estimate of these coefficients (in our experiments) than the method of [Ayan86], where the prediction coefficients are updated after every codebook update. In total, 10 updates are performed for the prediction coefficients, bringing the total number of codebook updates to 100.

Experiments have been performed, for construction C, for an AR(1) source (L = 1) having $a_1 = 0.9$. The resulting SNRs are listed in Table 3.11, together with the results obtained in [Marc90a], the differential pulse code modulation (DPCM, see for example [Jaya84]) performance, and the rate distortion bound. For our SNRs, the 95% confidence intervals correspond to a tolerance of no more than 0.003 dB.

Comparing the performances of the TWCs and the TCQs in Table 3.11, they can be seen to be the same. A comparison with Table 3.2 shows that the differences between the performances of the TWCs and the TCQs are smaller for the AR(1) source than for the Gaussian source, but for R = 2 and R = 3 the TWCs are not superior to the TCQs. For example, as can be seen from Table 3.2, the 16-state construction-B TCQ for R = 2 has an SNR of 11.00 dB and the corresponding TWC has an SNR of 10.89 dB, a difference of 0.11 dB. Table 3.11 shows that for the AR(1) source, the 16-state TCQ and TWC have the same performance, i.e. 17.95 dB. A comparison with Table 3.5 shows that the performance differences between the proposed TCQs and those of [Marc90a] are much smaller for the AR(1) source.

R   q   States   VW      MF      DPCM    RD
1   2      4     11.34   11.19   10.00   13.23
    2      8     11.63   11.60
    2     16     11.82   11.89
    2     32     12.08   12.13
    2     64     12.19   12.22
    2    128     12.31   12.41
    2    256     12.38   12.49
2   2     16     17.95   17.95   16.07   19.25
    2     64     18.35   18.24
    2    256     18.59   18.41
    4     16     17.95    --
    4     64     18.34    --
    4    256     18.61    --
3   2     64     24.32   23.90   21.69   25.27
    4     64     24.32    --
    8     64     24.27    --

Table 3.11: Experimental SNRs (in dB) for predictive TWC/TCQ of the AR(1) source for construction C (VW), at R = 1, R = 2, and R = 3, compared with the predictive TCQs of [Marc90a] (MF), the DPCM performance, and the rate distortion bound (RD).

Since the proposed TCQs are superior to those of [Marc90a] for memoryless sources, the reason for the relatively lower performance for the AR(1) source could be the optimization algorithm that was used, which is different from that of [Marc90a]. In [Marc90a], it is assumed that the prediction residuals are Gaussian and, consequently, a scaled version of the optimal codebook for the Gaussian source is used to quantize the residuals. Nevertheless, our codebook should converge to a codebook adapted to the residuals' PDF and again our TCQs should outperform those of [Marc90a]. The reason this is not the case is that (as our experiments show) the codebook update, Step 2 in Figure 3.2 (which is the same as Step 2 in Figure 3.1), can actually decrease the performance. In our experiments, this occurred more often as the rate decreased.

Another explanation for the lower performance is the fact that, due to the quantization errors, the prediction error signal is non-white [Jaya84], conflicting with the assumption of memoryless sources in the design of the proposed TCQs.

3.6 The M-Algorithm

The main disadvantage of the Viterbi algorithm is the fact that its complexity increases exponentially with the constraint length, ν. To circumvent this problem, a (suboptimal) reduced-state search algorithm can be applied. One such algorithm is the M-algorithm [Jeli71, Jaya84, Gers92]. An efficient implementation of this algorithm has been found [Vinc88a, Vinc88b, Vinc90].

Contrary to the Viterbi algorithm, which follows q^ν paths through the trellis (i.e. one per state), the M-algorithm only follows M ≤ q^ν paths through the trellis. At each step, the M paths are extended to qM paths and of these paths only the M best ones (i.e. the ones having the lowest accumulated distortion) are retained.
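A minimal sketch of this pruning scheme, assuming a hypothetical extend(path, sample) that returns the q one-branch extensions of a path together with their incremental distortions:

```python
import heapq

def m_algorithm(samples, extend, M, root):
    """Reduced-state trellis search: instead of one survivor per state
    (Viterbi), keep only the M lowest-distortion paths overall.  Each of
    the M paths is extended to q candidates per input sample, and the
    best M of the q*M candidates survive."""
    paths = [(0.0, root)]                      # (accumulated distortion, path)
    for sample in samples:
        candidates = []
        for dist, path in paths:               # extend M paths to q*M paths
            for step_dist, new_path in extend(path, sample):
                candidates.append((dist + step_dist, new_path))
        paths = heapq.nsmallest(M, candidates, key=lambda c: c[0])
    return min(paths, key=lambda c: c[0])      # best surviving path
```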

                  Source
R    ν   Laplacian   Gaussian   Uniform
1    6     5.52        5.47       6.48
     7     5.59        5.52       6.52
     8     5.61        5.49       6.52
2    6    11.26       11.28      12.88
     7    11.28       11.32      12.91
     8    11.27       11.32      12.92
1/2  6     2.88        2.71       3.05
     7     2.91        2.74       3.07
     8     2.93        2.74       3.10

Table 3.12: Experimental SNRs (in dB) using the M-algorithm with M = 64 for q = 2 construction-B TWCs, TCQs, and TCVQs (N = 2), at R = 1, R = 2, and R = 1/2, respectively, for the Laplacian, Gaussian, and uniform sources.

For q = 2, various construction-B TWCs, TCQs, and TCVQs (N = 2) at R = 1, R = 2, and R = 1/2, respectively, have been designed for the Laplacian, Gaussian, and uniform sources using the M-algorithm with M = 2^k, k = 0, 1, ..., 9. Training sequences of 800 000 i.i.d. samples were used and the performances were measured outside the training set. It was found that the quantizer performance is mainly a function of M. Increasing the number of states, 2^ν, results in only an insignificant performance improvement. To illustrate the typical behaviour, Table 3.12 lists the obtained performances for M = 64, as a function of ν. For ν = 6 the Viterbi algorithm performance is obtained.

3.7 Discussion

The observation that the proposed TCQs have performances equal (for the uniform source) or superior (for the Gaussian and Laplacian sources) to those of [Marc90a] calls for an explanation.

The differences between the proposed TCQs and those of [Marc90a] are that they are based on different convolutional codes and that they use a different number of (different) representation symbols. We do not know whether the TCQ construction of [Marc90a] generally guarantees a white spectrum.

The different convolutional codes probably do not account for the performance differences: the different constructions, A, B, and C, presented in this thesis have about the same performances. Also, in [Marc90a], a search was performed to find convolutional codes with better performances than Ungerboeck's codes, but little improvement was obtained.

The difference in the number of different representation symbols provides a better explanation for the performance gain. As shown in [Eyub93], the gain of a TCQ over a uniform scalar quantizer can be separated (asymptotically, at high rates) into two components: the granular gain and the boundary gain. The granular gain arises from a more efficient local space covering. For the mean-squared error distortion measure, used in this thesis, the granular gain is at most 0.255 b/sample. The boundary gain arises from a more efficient global space covering, i.e. it is caused by the ability of the TCQ to adapt its representation symbol density to the source density. Whereas for the uniform source there is no boundary gain, for non-uniform sources the boundary gain can be much larger than the granular gain.
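For reference (this derivation is standard and not quoted from [Eyub93]), the 0.255 b/sample figure corresponds to the maximal granular gain of πe/6, converted from decibels to a rate at about 6.02 dB per bit per sample:

$$10\log_{10}\!\left(\frac{\pi e}{6}\right) \approx 1.53~\mathrm{dB}, \qquad \frac{1.53~\mathrm{dB}}{6.02~\mathrm{dB/bit}} \approx 0.255~\mathrm{b/sample}.$$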

In [Marc90a], for the Gaussian and Laplacian sources, respectively, at most 4 and 8 different sets of representation symbols are used, whereas the proposed constructions use q^ν different sets of representation symbols for a q^ν-state TCQ. Since the proposed TCQs use more different representation symbols, they are better able to adapt to the source density. The conjecture that the gain of the proposed TCQs over those of [Marc90a] is attributable to the boundary gain is supported by the observation that, for the uniform source, the proposed TCQs do not provide a gain over those of [Marc90a]. It is also supported by the entropy of the selected representation symbols, which reaches the rate R only if all

                         States
Src   R     8     16    32    64    128   256   512    LM    RD
Lap   1    0.94  0.96  0.96  0.97  0.98  0.98  0.98   1.00  1.00
      2    1.87  1.91  1.92  1.95  1.95  1.96  1.95   1.72  2.00
      3    2.84  2.88  2.91  2.92  2.93  2.92   --    2.57  3.00
Gau   1    0.99  0.99  0.99  1.00  0.99  0.99  0.99   1.00  1.00
      2    1.96  1.97  1.98  1.97  1.98  1.98  1.97   1.91  2.00
      3    2.94  2.95  2.96  2.96  2.96  2.94   --    2.82  3.00

Table 3.13: Entropies of the proposed construction-B TCQs compared with the Lloyd-Max quantizer (LM) and rate distortion theory (RD) values, for the Laplacian and Gaussian sources at R = 1, R = 2, and R = 3.

representation symbols are selected with equal probability. Table 3.13 lists the entropies of the construction-B TCQs, for the Laplacian and Gaussian sources, as a function of the rate and the number of states. The entropies increase with the number of states and it can be seen that, for the proposed TCQs, it is a good approximation to assume that all branches are selected with equal probability.

The better the representation-symbol density of the TCQ matches the source density, i.e. the higher the boundary gain, the more all representation symbols will be used with equal probability. Thus, the entropy indicates how well the TCQ exploits the boundary gain. In [Marc90a] the entropies were not determined. We conjecture that they are lower than the entropies for the proposed TCQs.
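The entropies of Table 3.13 are first-order entropies of the selected representation-symbol indices; a minimal sketch of the estimate (assuming symbols holds the encoder's output index sequence):

```python
import numpy as np

def empirical_entropy(symbols):
    """First-order entropy (in b/symbol) of the representation-symbol
    sequence; it reaches the rate R only when all representation symbols
    are selected with equal probability."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()                 # relative symbol frequencies
    return -np.sum(p * np.log2(p))
```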

3.8 Conclusions

In the experiments for the memoryless Gaussian, Laplacian, and uniform sources, at the same rate and complexity, the proposed TCQs have performances equal or superior to those of the TWCs. The TCVQ performance improves as q is increased, even though the number of representation symbols decreases with q. The best TCVQ performance is obtained by simultaneously increasing N and q, at a constant rate and complexity.

For the memoryless Gaussian and Laplacian sources, the proposed TCQs at 1, 2, and 3 b/sample improve upon all previously published results (as listed in [Marc90a]). For the uniform source, the performance equals that of [Marc90a]. We conjecture that the gains of the proposed TCQs over those of [Marc90a] are attributable to a higher boundary gain.

Rate Distortion Theory for Trellis Waveform Coding

4.1 Introduction

The discipline of information theory that treats quantization is called rate distortion theory. Its main finding is that there is a function R(D), the rate distortion function, which specifies the effective rate R of the source when its outputs must be reproduced with an average distortion of no more than D. The foundations of rate distortion theory were laid by Shannon, in 1948 [Shan48a, Shan48b] and 1959 [Shan59]. The book on the same subject, written by Berger [Berg71] in 1971, has become a classic.

As an extension to the traditional rate distortion theory for continuous-amplitude sources with a continuous representation and discrete-amplitude sources with discrete representations, Pearlman et al. developed a rate distortion theory for memoryless continuous-amplitude sources with discrete representations, which they call discrete alphabet rate distortion theory.
