Distributed Source Coding - Pier Luigi Dragotti, Michael Gastpar

Distributed Source Coding (eBook)

Theory, Algorithms and Applications
eBook Download: PDF | EPUB
2009 | 1st edition
360 pages
Elsevier Science (publisher)
978-0-08-092274-4 (ISBN)
The advent of wireless sensor technology and ad-hoc networks has made DSC a major field of interest. Edited and written by the leading players in the field, this book presents the latest theory, algorithms and applications, making it the definitive reference on DSC for systems designers and implementers, researchers, and graduate students.

This book gives a clear understanding of the performance limits of distributed source coders for specific classes of sources and presents the design and application of practical algorithms for realistic scenarios. Material covered includes the application of standard channel codes, such as LDPC and turbo codes, to DSC, and a discussion of the suitability of compressed sensing for the distributed compression of sparse signals. Extensive applications are presented, including distributed video coding, microphone arrays, and securing biometric data.

This book is a great resource covering the breadth and depth of distributed source coding that's appropriate for everyone from theoreticians to practitioners. - Richard Baraniuk, Rice University

*Clear explanation of the principles of distributed source coding (DSC), a technology that has applications in sensor networks, ad-hoc networks, and distributed wireless video systems for surveillance
*Edited and written by the leading players in the field, providing a complete and authoritative reference
*Contains all the latest theory, practical algorithms for DSC design and the most recently developed applications

Front Cover 1
Distributed Source Coding 4
Copyright Page 5
Table of Contents 6
List of Contributors 14
Introduction 20
Part I: Theory 22
Chapter 1. Foundations of Distributed Source Coding 24
1.1 Introduction 25
1.2 Centralized Source Coding 25
1.2.1 Lossless Source Coding 25
1.2.2 Lossy Source Coding 26
1.2.3 Lossy Source Coding for Sources with Memory 29
1.2.4 Some Notes on Practical Considerations 30
1.3 Distributed Source Coding 30
1.3.1 Lossless Source Coding 31
1.3.2 Lossy Source Coding 32
1.3.3 Interaction 36
1.4 Remote Source Coding 37
1.4.1 Centralized 37
1.4.2 Distributed: The CEO Problem 40
1.5 Joint Source-channel Coding 43
Acknowledgments 44
Appendix A: Formal Definitions and Notations 44
A.1 Notation 44
A.1.1 Centralized Source Coding 46
A.1.2 Distributed Source Coding 47
A.1.3 Remote Source Coding 48
References 49
Chapter 2. Distributed Transform Coding 54
2.1 Introduction 54
2.2 Foundations of Centralized Transform Coding 56
2.2.1 Transform Coding Overview 56
2.2.2 Lossless Compression 57
2.2.3 Quantizers 58
2.2.4 Bit Allocation 59
2.2.5 Transforms 60
2.2.6 Linear Approximation 62
2.3 The Distributed Karhunen–Loève Transform 63
2.3.1 Problem Statement and Notation 64
2.3.2 The Two-terminal Scenario 65
2.3.3 The Multiterminal Scenario and the Distributed KLT Algorithm 70
2.4 Alternative Transforms 70
2.4.1 Practical Distributed Transform Coding with Side Information 71
2.4.2 High-rate Analysis of Source Coding with Side Information at Decoder 71
2.5 New Approaches to Distributed Compression with FRI 72
2.5.1 Background on Sampling of 2D FRI Signals 73
2.5.2 Detailed Example: Coding Scheme for Translating a Bi-level Polygon 74
2.6 Conclusions 79
References 79
Chapter 3. Quantization for Distributed Source Coding 82
3.1 Introduction 83
3.2 Formulation of the Problem 85
3.2.1 Conventions 85
3.2.2 Network Distributed Source Coding 86
3.2.3 Cost, Distortion, and Rate Measures 87
3.2.4 Optimal Quantizers and Reconstruction Functions 88
3.2.5 Example: Quantization of Side Information 88
3.3 Optimal Quantizer Design 89
3.3.1 Optimality Conditions 89
3.3.2 Lloyd Algorithm for Distributed Quantization 90
3.4 Experimental Results 91
3.5 High-rate Distributed Quantization 94
3.5.1 High-rate WZ Quantization of Clean Sources 95
3.5.2 High-rate WZ Quantization of Noisy Sources 97
3.5.3 High-rate Network Distributed Quantization 101
3.6 Experimental Results Revisited 105
3.7 Conclusions 106
References 107
Chapter 4. Zero-error Distributed Source Coding 110
4.1 Introduction 110
4.2 Graph Theoretic Connections 113
4.2.1 VLZE Coding and Graphs 113
4.2.2 Basic Definitions and Notation 116
4.2.3 Graph Entropies 117
4.2.4 Graph Capacity 119
4.3 Complementary Graph Entropy and VLZE Coding 119
4.4 Network Extensions 121
4.4.1 Extension 1: VLZE Coding When Side Information May Be Absent 121
4.4.2 Extension 2: VLZE Coding with Compound Side Information 123
4.5 VLZE Code Design 125
4.5.1 Hardness of Optimal Code Design 125
4.5.2 Hardness of Coding with Length Constraints 128
4.5.3 An Exponential-time Optimal VLZE Code Design Algorithm 129
4.6 Conclusions 130
References 131
Chapter 5. Distributed Coding of Sparse Signals 132
5.1 Introduction 132
5.1.1 Sparse Signals 133
5.1.2 Signal Recovery with Compressive Sampling 134
5.2 Compressive Sampling as Distributed Source Coding 135
5.2.1 Modeling Assumptions 137
5.2.2 Analyses 138
5.2.3 Numerical Simulation 142
5.3 Information Theory to the Rescue? 144
5.4 Conclusions—Whither Compressive Sampling? 146
Appendix 146
5.5 Quantizer Performance and Quantization Error 146
Acknowledgments 147
References 147
Part II: Algorithms and Applications 150
Chapter 6. Toward Constructive Slepian–Wolf Coding Schemes 152
6.1 Introduction 152
6.2 Asymmetric SW Coding 153
6.2.1 Principle of Asymmetric SW Coding 153
6.2.2 Practical Code Design Based on Channel Codes 157
6.2.3 Rate Adaptation 160
6.3 Nonasymmetric SW Coding 164
6.3.1 Time Sharing 164
6.3.2 The Parity Approach 164
6.3.3 The Syndrome Approach 165
6.3.4 Source Splitting 169
6.3.5 Rate Adaptation 170
6.4 Advanced Topics 172
6.4.1 Practical Code Design Based on Source Codes 172
6.4.2 Generalization to Nonbinary Sources 174
6.4.3 Generalization to M Sources 174
6.5 Conclusions 174
References 175
Chapter 7. Distributed Compression in Microphone Arrays 178
7.1 Introduction 179
7.2 Spatiotemporal Evolution of the Sound Field 180
7.2.1 Recording Setups 180
7.2.2 Spectral Characteristics 184
7.2.3 Spatiotemporal Sampling and Reconstruction 185
7.3 Huygens's Configuration 190
7.3.1 Setup 190
7.3.2 Coding Strategies 191
7.3.3 Rate-distortion Trade-offs 192
7.4 Binaural Hearing Aid Configuration 198
7.4.1 Setup 198
7.4.2 Coding Strategies 200
7.4.3 Rate-distortion Trade-offs 201
7.5 Conclusions 206
Acknowledgment 207
References 207
Chapter 8. Distributed Video Coding: Basics, Codecs, and Performance 210
8.1 Introduction 211
8.2 Basics on Distributed Video Coding 213
8.3 The Early Wyner–Ziv Video Coding Architectures 216
8.3.1 The Stanford WZ Video Codec 216
8.3.2 The Berkeley WZ Video Codec 219
8.3.3 Comparing the Early WZ Video Codecs 221
8.4 Further Developments on Wyner–Ziv Video Coding 222
8.4.1 Improving RD Performance 222
8.4.2 Removing the Feedback Channel 225
8.4.3 Improving Error Resilience 226
8.4.4 Providing Scalability 227
8.5 The DISCOVER Wyner–Ziv Video Codec 228
8.5.1 Transform and Quantization 231
8.5.2 Slepian–Wolf Coding 232
8.5.3 Side Information Creation 234
8.5.4 Correlation Noise Modeling 235
8.5.5 Reconstruction 236
8.6 The DISCOVER Codec Performance 237
8.6.1 Performance Evaluation Conditions 237
8.6.2 RD Performance Evaluation 240
8.6.3 Complexity Performance Evaluation 253
8.7 Final Remarks 262
Acknowledgments 263
References 263
Chapter 9. Model-based Multiview Video Compression Using Distributed Source Coding Principles 268
9.1 Introduction 268
9.2 Model Tracking 270
9.2.1 Image Appearance Model of a Rigid Object 271
9.2.2 Inverse Compositional Estimation of 3D Motion and Illumination 272
9.3 Distributed Compression Schemes 275
9.3.1 Feature Extraction and Coding 276
9.3.2 Types of Frames 277
9.3.3 Types of Side Information 278
9.4 Experimental Results 279
9.5 Conclusions 284
References 287
Chapter 10. Distributed Compression of Hyperspectral Imagery 290
10.1 Introduction 290
10.1.1 Hyperspectral Imagery Compression: State of the Art 292
10.1.2 Outline of This Chapter 294
10.2 Hyperspectral Image Compression 294
10.2.1 Dataset Characteristics 294
10.2.2 Intraband Redundancy and Cross-band Correlation 295
10.2.3 Limitations of Existing Hyperspectral Compression Techniques 296
10.3 DSC-based Hyperspectral Image Compression 298
10.3.1 Potential Advantages of DSC-based Hyperspectral Compression 299
10.3.2 Challenges in Applying DSC for Hyperspectral Imaging 300
10.4 Example Designs 301
10.4.1 DSC Techniques for Lossless Compression of Hyperspectral Images 301
10.4.2 Wavelet-based Slepian–Wolf Coding for Lossy-to-lossless Compression of Hyperspectral Images 304
10.4.3 Distributed Compression of Multispectral Images Using a Set Theoretic Approach 309
10.5 Conclusions 310
References 310
Chapter 11. Securing Biometric Data 314
11.1 Introduction 315
11.1.1 Motivation and Objectives 315
11.1.2 Architectures and System Security 316
11.1.3 Chapter Organization 317
11.2 Related Work 317
11.3 Overview of Secure Biometrics Using Syndromes 320
11.3.1 Notation 320
11.3.2 Enrollment and Authentication 320
11.3.3 Performance Measures: Security and Robustness 321
11.3.4 Quantifying Security 323
11.3.5 Implementation Using Syndrome Coding 326
11.4 Iris System 327
11.4.1 Enrollment and Authentication 327
11.4.2 Experimental Results 328
11.5 Fingerprint System: Modeling Approach 330
11.5.1 Minutiae Representation of Fingerprints 330
11.5.2 Modeling the Movement of Fingerprint Minutiae 331
11.5.3 Experimental Evaluation of Security and Robustness 334
11.5.4 Remarks on the Modeling Approach 336
11.6 Fingerprint System: Transformation Approach 336
11.6.1 Desired Statistical Properties of Feature Vectors 337
11.6.2 Feature Transformation Algorithm 338
11.6.3 Experimental Evaluation of Security and Robustness 339
11.7 Summary 343
References 344
Index 346

CHAPTER 2

Distributed Transform Coding

Varit Chaisinthop and Pier Luigi Dragotti, Department of Electrical and Electronic Engineering, Imperial College London, UK

2.1 INTRODUCTION


Compression, or approximation, of an observed source is a central problem in signal processing and communications, and transform coding has over the years emerged as the dominant compression strategy. This is because transform coders are normally very efficient and computationally simple. Transforms are used in most compression standards, including image compression standards such as JPEG and JPEG 2000 [20, 34] and video compression standards such as H.26x [31].

Transform coding has been widely studied in recent years, and many important results and optimality conditions have been derived. For example, it is well known that, in some particular settings, the Karhunen–Loève transform (KLT) is the optimal transform for compression or linear approximation [15, 18]. More recently, the analysis of popular transform coders has led to new insights into the general problem of source approximation and to interesting new connections between compression and nonlinear approximation. This analysis has also clarified why the wavelet transform (WT) is the best transform in some particular situations [4, 21, 32].
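The decorrelating property that underlies the KLT's optimality can be seen numerically. The following sketch uses synthetic Gaussian data; the dimensions, covariance, and variable names are our own illustrative choices, not from the chapter:

```python
import numpy as np

# Sketch of the KLT's decorrelating property on synthetic correlated data.
rng = np.random.default_rng(0)
N = 4
A = rng.standard_normal((N, N))
x = rng.multivariate_normal(np.zeros(N), A @ A.T, size=10000)

# The KLT basis is the eigenbasis of the (here, empirical) covariance matrix.
cov_x = np.cov(x, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov_x)
T = eigvecs.T                      # rows of T are the KLT basis vectors

# The transform coefficients y = T x are decorrelated: the covariance of y
# is diagonal, with the eigenvalues of cov_x on its diagonal.
y = x @ T.T
cov_y = np.cov(y, rowvar=False)
off_diag = cov_y - np.diag(np.diag(cov_y))
print(np.max(np.abs(off_diag)))    # numerically zero
```

Because T diagonalizes the empirical covariance exactly, the off-diagonal entries of the coefficient covariance vanish up to floating-point error.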

In distributed source coding, however, the source is not available at a single location, and thus it is not possible to apply a single transform to the entire source. Instead, the source is partially observed by separate encoders that have to devise a local compression strategy. It is thus natural to study how the classical centralized transform coding paradigm changes in this new context. For example, if each encoder performs transform coding locally, should the local transform be different from the one used in centralized compression? And how are the other modules of a transform coder going to change? Are quantization, bit allocation, and entropy coding going to be different?

Distributed transform coding has been analyzed in recent years, and some answers can be provided. When the source is Gaussian, the KLT emerges again as the best (and in some cases optimal) transform, even though changes to the classical structure of the transform have to be implemented. This has been studied in [9–12], and the asymptotic behavior of the distributed KLT has been considered in [28, 29]. A similar problem has also been studied in [22, 37]. In the high bit-rate regime, some optimality conditions for transforms have also been proved [8, 27].

The problem of distributed transform coding remains widely open, however, when the Gaussianity of the source and the high bit-rate assumptions are relaxed. The design of distributed transform coders in this particular case is normally based on heuristics, and usually the local transform is not different from the one used in the centralized scenario. Most of the modifications are done in the quantization and entropy coding stages. For example, in the context of distributed video coding, the structure of the discrete cosine transform (DCT) is the same as in classical video coding. However, the DCT coefficients are quantized in a different way in order to exploit the fact that correlated information is available at the decoder. This strategy, though effective, is not necessarily optimal.
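The idea of quantizing coefficients differently to exploit correlated information at the decoder can be illustrated with a toy coset ("syndrome") example. This is our own sketch, not the scheme of any particular codec; the step size, number of cosets M, and correlation model are assumptions:

```python
import numpy as np

# Toy coset sketch of quantization with decoder side information: the encoder
# sends only the quantizer index modulo M, and the decoder resolves the
# resulting ambiguity using side information known to be close to the source.
rng = np.random.default_rng(3)
step, M = 0.5, 4

x = rng.uniform(-4, 4, 1000)                  # source seen by the encoder
y = x + rng.normal(0, 0.05, x.shape)          # side information at the decoder

idx = np.round(x / step).astype(int)          # full quantizer index
coset = idx % M                               # encoder sends only log2(M) bits

# Decoder: among all indices congruent to the coset, pick the closest to y.
k = np.round((y / step - coset) / M).astype(int)
idx_hat = k * M + coset
x_hat = idx_hat * step

print(bool(np.all(idx_hat == idx)))           # True: here the side information
                                              # always resolves the ambiguity
```

The encoder thus spends log2(M) bits instead of the full index cost, which only works because the decoder's side information is close enough to the source to pick the right coset member.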

In the next section, we review the main results on centralized transform coding. Section 2.3 provides an overview of the main results that appeared in [12] where extensions of the KLT to the distributed scenario were considered. Sections 2.4 and 2.5 address the case where the input source is not Gaussian. The high bit-rate regime is discussed, and extensions of DCT and WT are presented. We conclude in Section 2.6.

2.2 FOUNDATIONS OF CENTRALIZED TRANSFORM CODING


2.2.1 Transform Coding Overview

The problem of compression or source coding can be divided into two types: unconstrained and constrained. In an unconstrained source coding problem, the encoder has access to the entire source vector x. It is therefore possible to devise an optimal source code for a particular source vector. In practice, however, it is rarely the case that the entire source vector can be observed. In addition, even though good compression results can be achieved, the high complexity of an unconstrained source coder makes it impractical to implement.

A typical compression scheme is shown in Figure 2.1. A standard compressor consists of three independent blocks: a linear transform, a quantizer, and a lossless entropy encoder. This type of structure is called transform coding. A transform code is an example of a constrained source code. Constrained source codes are, loosely speaking, suboptimal but low in complexity; the low complexity arises from the modularization of the encoding process. This approach allows “simple coding” to be used with high efficiency, where simple coding refers to the use of scalar quantizers and scalar entropy coding. This is one of the main reasons the transform code is the most widely used source code today. Given a random vector x of size N, the simplicity of the transform code allows x to be encoded even for large values of N.

FIGURE 2.1 Transform coding. A typical compression scheme consists of three elements: linear transform, quantization, and lossless compression (entropy coding).

The task of an encoder is to map the source vector to a bitstream of finite length. An invertible linear transform produces decorrelated transform coefficients y = Tx, where T is an invertible N × N matrix. The quantizer then maps y to a discrete set of indices I, which yield the estimates ŷ of the coefficients. This is then followed by a lossless entropy encoder that performs a reversible mapping from I to a bitstream.

The decoder essentially reverses the operations of the encoder to reconstruct an approximation of the source. Since entropy coding is a lossless process, the quantizer indices can be recovered exactly from the bitstream to give the estimates ŷ of the transform coefficients. A linear transform, generally denoted by U, is then applied to ŷ to produce the approximation of the source x̂ = U ŷ, where, as is often the case, U = T⁻¹.
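The encoder/decoder chain just described can be sketched end to end on a toy source. This is our own example; the orthonormal transform, step size, and data are illustrative assumptions:

```python
import numpy as np

# End-to-end transform-coder sketch: transform, scalar quantization to
# indices, then reconstruction by the inverse transform at the decoder.
rng = np.random.default_rng(1)
N = 8
x = rng.standard_normal(N)                           # source vector

T = np.linalg.qr(rng.standard_normal((N, N)))[0].T   # an orthonormal transform

# Encoder: y = T x, then uniform scalar quantization to indices I.
step = 0.25
y = T @ x
indices = np.round(y / step).astype(int)

# Decoder: recover coefficient estimates, then apply U = T^{-1} (= T' here,
# since T is orthonormal).
y_hat = indices * step
x_hat = T.T @ y_hat

# Uniform quantization bounds each coefficient error by half the step size.
assert np.max(np.abs(y - y_hat)) <= step / 2 + 1e-12
```

Since the transform is orthonormal, the reconstruction error in the signal domain equals the quantization error in the coefficient domain, so ‖x − x̂‖ is at most √N · step/2.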

The quality of a lossy encoder is normally measured by the mean-squared error (MSE) distortion of the approximation x̂. The MSE distortion is denoted by D = E[d(x, x̂)], where

d(x, x̂) = (1/N) ||x − x̂||²

and E[·] is the expectation operator. In order to gauge the performance of a source code, the distortion is normally measured against the rate R, which is the expected number of bits produced by the encoder divided by the length N of the encoded source vector x. This is normally termed the distortion-rate performance. Thus the performance of one source code is said to be better than that of another in a rate-distortion sense if, at the same given rate R, the former achieves a lower distortion than the latter.
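The distortion D can be estimated empirically by averaging the per-vector distortion d(x, x̂) = (1/N)||x − x̂||² over many realizations. A small sketch with made-up data, modeling the reconstruction error as additive noise:

```python
import numpy as np

# Empirical estimate of D = E[d(x, x_hat)].  The data are synthetic: the
# reconstruction error is modeled as additive noise of variance 0.01.
rng = np.random.default_rng(2)
N = 16
x = rng.standard_normal((50000, N))           # many source vectors
x_hat = x + rng.normal(0.0, 0.1, x.shape)     # pretend reconstructions

d = np.sum((x - x_hat) ** 2, axis=1) / N      # d(x, x_hat) per source vector
D = d.mean()                                  # estimate of E[d(x, x_hat)]
print(round(D, 3))                            # ≈ 0.01, the injected variance
```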

We will now look at each building block of a transform coder in more detail.

2.2.2 Lossless Compression


An entropy code is a form of lossless compression, which is reversible and can therefore only be applied to discrete sources. Consider a discrete source X that produces I different values ai, where the probability of occurrence of each value is denoted by p(ai). An entropy code assigns a unique binary representation b(ai), or codeword, to each ai. The goal is to find a codeword b(ai) for each symbol so that the expected length of...
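The expected codeword length being minimized here is bounded below by the source entropy H(X), and a Huffman code gets within one bit of it (matching it exactly for dyadic probabilities). A small sketch, with made-up source probabilities of our own choosing:

```python
import heapq
import math

# Entropy vs. expected Huffman codeword length for a toy discrete source.
p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

H = -sum(pi * math.log2(pi) for pi in p.values())       # entropy, bits/symbol

# Huffman code: repeatedly merge the two least probable subtrees, prefixing
# "0"/"1" to the codewords in each merged subtree.  The counter breaks ties
# so the heap never compares the dict payloads.
heap = [(pi, i, {ai: ""}) for i, (ai, pi) in enumerate(p.items())]
heapq.heapify(heap)
count = len(heap)
while len(heap) > 1:
    p0, _, c0 = heapq.heappop(heap)
    p1, _, c1 = heapq.heappop(heap)
    merged = {a: "0" + w for a, w in c0.items()}
    merged.update({a: "1" + w for a, w in c1.items()})
    heapq.heappush(heap, (p0 + p1, count, merged))
    count += 1
codewords = heap[0][2]                                   # b(ai) for each ai

L = sum(p[a] * len(w) for a, w in codewords.items())     # expected length
print(H, L)   # both are 1.75 bits for this dyadic source
```

For this dyadic source the code lengths (1, 2, 3, 3 bits) exactly match −log2 p(ai), so the expected length achieves the entropy bound.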

Publication date (per publisher): February 24, 2009
Language: English
ISBN-10: 0-08-092274-0 / 0080922740
ISBN-13: 978-0-08-092274-4 / 9780080922744
