Price: $196.95
Format: Hardback
Language: English

DIGITAL SPEECH TRANSMISSION AND ENHANCEMENT

Enables readers to understand the latest developments in speech enhancement/transmission due to advances in computational power and device miniaturization.

The Second Edition of Digital Speech Transmission and Enhancement has been updated throughout to provide all the necessary details on the latest advances in the theory and practice of speech signal processing and its applications, including many new research results, standards, algorithms, and developments that have recently appeared and are making their way into state-of-the-art applications.

Besides mobile communications, which constituted the main application domain of the first edition, speech enhancement for hearing instruments and man-machine interfaces has gained considerable prominence over the past decade and therefore receives greater attention in this updated and expanded second edition.

Readers can expect to find information and novel methods on:

- Low-latency spectral analysis-synthesis, and single-channel and dual-channel algorithms for noise reduction and dereverberation (see the sketch after this list)
- Multi-microphone processing methods, which are now widely used in applications such as mobile phones, hearing aids, and man-computer interfaces
- Algorithms for near-end listening enhancement, which provide significantly increased speech intelligibility for users at the noisy receiving side of their mobile phone
- Fundamentals of speech signal processing, estimation and machine learning, speech coding, error concealment by soft decoding, and artificial bandwidth extension of speech signals
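For a concrete feel for the kind of single-channel processing listed above (and treated in Chapter 12), the sketch below implements a basic spectral-subtraction gain rule in Python. It is a minimal illustration only; the frame length, oversubtraction factor, and spectral floor are assumed demonstration values, not settings taken from the book.

```python
# Minimal single-channel spectral-subtraction sketch (illustrative only).
# All parameter values are assumptions for demonstration, not recommendations
# from the book.
import numpy as np

def spectral_subtraction(noisy, frame_len=512, hop=256, noise_frames=10,
                         oversubtraction=2.0, spectral_floor=0.05):
    """Attenuate stationary noise by subtracting an estimated noise magnitude
    spectrum from each short-time frame and resynthesizing by overlap-add."""
    window = np.hanning(frame_len)
    starts = range(0, len(noisy) - frame_len, hop)
    frames = np.array([noisy[s:s + frame_len] * window for s in starts])

    spectra = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spectra), np.angle(spectra)

    # Crude noise estimate: mean magnitude of the first, assumed speech-free, frames.
    noise_mag = mag[:noise_frames].mean(axis=0)

    # Oversubtract the noise estimate; a spectral floor limits musical noise.
    clean_mag = np.maximum(mag - oversubtraction * noise_mag, spectral_floor * mag)

    # Overlap-add synthesis with the noisy phase (Hann window, 50% overlap).
    clean_frames = np.fft.irfft(clean_mag * np.exp(1j * phase), n=frame_len, axis=1)
    enhanced = np.zeros(len(noisy))
    for k, s in enumerate(starts):
        enhanced[s:s + frame_len] += clean_frames[k]
    return enhanced
```

A practical system would replace the fixed noise estimate and gain rule with the adaptive noise power trackers, a priori SNR estimators, and optimized gain functions covered in Chapter 12.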

Digital Speech Transmission and Enhancement is a single-source, comprehensive guide to the fundamental issues, algorithms, standards, and trends in speech signal processing and speech communication technology, and as such is an invaluable resource for engineers, researchers, academics, and graduate students in the areas of communications, electrical engineering, and information technology.

By: Peter Vary, Rainer Martin
Imprint: Wiley-IEEE Press
Country of Publication: United States
Edition: 2nd edition
Dimensions: Height: 244mm, Width: 170mm, Spine: 37mm
Weight: 1.134kg
ISBN: 9781119060963
ISBN 10: 1119060966
Series: IEEE Press
Pages: 592
Publication Date: 23 February 2024
Audience: Professional and scholarly, Undergraduate
Format: Hardback
Publisher's Status: Active

Preface xv

1 Introduction 1

2 Models of Speech Production and Hearing 5
2.1 Sound Waves 5 2.2 Organs of Speech Production 7 2.3 Characteristics of Speech Signals 9 2.4 Model of Speech Production 10 2.4.1 Acoustic Tube Model of the Vocal Tract 12 2.4.2 Discrete Time All-Pole Model of the Vocal Tract 19 2.5 Anatomy of Hearing 25 2.6 Psychoacoustic Properties of the Auditory System 27 2.6.1 Hearing and Loudness 27 2.6.2 Spectral Resolution 29 2.6.3 Masking 31 2.6.4 Spatial Hearing 32 2.6.4.1 Head-Related Impulse Responses and Transfer Functions 33 2.6.4.2 Law of the First Wavefront 34 References 35

3 Spectral Transformations 37
3.1 Fourier Transform of Continuous Signals 37 3.2 Fourier Transform of Discrete Signals 38 3.3 Linear Shift Invariant Systems 41 3.3.1 Frequency Response of LSI Systems 42 3.4 The z-transform 42 3.4.1 Relation to Fourier Transform 43 3.4.2 Properties of the ROC 44 3.4.3 Inverse z-Transform 44 3.4.4 z-Transform Analysis of LSI Systems 46 3.5 The Discrete Fourier Transform 47 3.5.1 Linear and Cyclic Convolution 48 3.5.2 The DFT of Windowed Sequences 51 3.5.3 Spectral Resolution and Zero Padding 54 3.5.4 The Spectrogram 55 3.5.5 Fast Computation of the DFT: The FFT 56 3.5.6 Radix-2 Decimation-in-Time FFT 57 3.6 Fast Convolution 60 3.6.1 Fast Convolution of Long Sequences 60 3.6.2 Fast Convolution by Overlap-Add 61 3.6.3 Fast Convolution by Overlap-Save 61 3.7 Analysis–Modification–Synthesis Systems 64 3.8 Cepstral Analysis 66 3.8.1 Complex Cepstrum 67 3.8.2 Real Cepstrum 69 3.8.3 Applications of the Cepstrum 70 3.8.3.1 Construction of Minimum-Phase Sequences 70 3.8.3.2 Deconvolution by Cepstral Mean Subtraction 71 3.8.3.3 Computation of the Spectral Distortion Measure 72 3.8.3.4 Fundamental Frequency Estimation 73 References 75

4 Filter Banks for Spectral Analysis and Synthesis 79
4.1 Spectral Analysis Using Narrowband Filters 79 4.1.1 Short-Term Spectral Analyzer 83 4.1.2 Prototype Filter Design for the Analysis Filter Bank 86 4.1.3 Short-Term Spectral Synthesizer 87 4.1.4 Short-Term Spectral Analysis and Synthesis 88 4.1.5 Prototype Filter Design for the Analysis–Synthesis Filter Bank 90 4.1.6 Filter Bank Interpretation of the DFT 92 4.2 Polyphase Network Filter Banks 94 4.2.1 PPN Analysis Filter Bank 95 4.2.2 PPN Synthesis Filter Bank 101 4.3 Quadrature Mirror Filter Banks 104 4.3.1 Analysis–Synthesis Filter Bank 104 4.3.2 Compensation of Aliasing and Signal Reconstruction 106 4.3.3 Efficient Implementation 109 4.4 Filter Bank Equalizer 112 4.4.1 The Reference Filter Bank 112 4.4.2 Uniform Frequency Resolution 113 4.4.3 Adaptive Filter Bank Equalizer: Gain Computation 117 4.4.3.1 Conventional Spectral Subtraction 117 4.4.3.2 Filter Bank Equalizer 118 4.4.4 Non-uniform Frequency Resolution 120 4.4.5 Design Aspects & Implementation 122 References 123

5 Stochastic Signals and Estimation 127
5.1 Basic Concepts 127 5.1.1 Random Events and Probability 127 5.1.2 Conditional Probabilities 128 5.1.3 Random Variables 129 5.1.4 Probability Distributions and Probability Density Functions 129 5.1.5 Conditional PDFs 130 5.2 Expectations and Moments 130 5.2.1 Conditional Expectations and Moments 131 5.2.2 Examples 131 5.2.2.1 The Uniform Distribution 132 5.2.2.2 The Gaussian Density 132 5.2.2.3 The Exponential Density 132 5.2.2.4 The Laplace Density 133 5.2.2.5 The Gamma Density 134 5.2.2.6 χ2-Distribution 134 5.2.3 Transformation of a Random Variable 135 5.2.4 Relative Frequencies and Histograms 136 5.3 Bivariate Statistics 137 5.3.1 Marginal Densities 137 5.3.2 Expectations and Moments 137 5.3.3 Uncorrelatedness and Statistical Independence 138 5.3.4 Examples of Bivariate PDFs 139 5.3.4.1 The Bivariate Uniform Density 139 5.3.4.2 The Bivariate Gaussian Density 139 5.3.5 Functions of Two Random Variables 140 5.4 Probability and Information 141 5.4.1 Entropy 141 5.4.2 Kullback–Leibler Divergence 141 5.4.3 Cross-Entropy 142 5.4.4 Mutual Information 142 5.5 Multivariate Statistics 142 5.5.1 Multivariate Gaussian Distribution 143 5.5.2 Gaussian Mixture Models 144 5.6 Stochastic Processes 145 5.6.1 Stationary Processes 145 5.6.2 Auto-Correlation and Auto-Covariance Functions 146 5.6.3 Cross-Correlation and Cross-Covariance Functions 147 5.6.4 Markov Processes 147 5.6.5 Multivariate Stochastic Processes 148 5.7 Estimation of Statistical Quantities by Time Averages 150 5.7.1 Ergodic Processes 150 5.7.2 Short-Time Stationary Processes 150 5.8 Power Spectrum and its Estimation 151 5.8.1 White Noise 152 5.8.2 The Periodogram 152 5.8.3 Smoothed Periodograms 153 5.8.3.1 Non Recursive Smoothing in Time 153 5.8.3.2 Recursive Smoothing in Time 154 5.8.3.3 Log-Mel Filter Bank Features 154 5.8.4 Power Spectra and Linear Shift-Invariant Systems 156 5.9 Statistical Properties of Speech Signals 157 5.10 Statistical Properties of DFT Coefficients 157 5.10.1 Asymptotic Statistical Properties 158 5.10.2 Signal-Plus-Noise Model 159 5.10.3 Statistics of DFT Coefficients for Finite Frame Lengths 160 5.11 Optimal Estimation 162 5.11.1 MMSE Estimation 163 5.11.2 Estimation of Discrete Random Variables 164 5.11.3 Optimal Linear Estimator 164 5.11.4 The Gaussian Case 165 5.11.5 Joint Detection and Estimation 166 5.12 Non-Linear Estimation with Deep Neural Networks 167 5.12.1 Basic Network Components 168 5.12.1.1 The Perceptron 168 5.12.1.2 Convolutional Neural Network 170 5.12.2 Basic DNN Structures 170 5.12.2.1 Fully-Connected Feed-Forward Network 171 5.12.2.2 Autoencoder Networks 171 5.12.2.3 Recurrent Neural Networks 172 5.12.2.4 Time Delay, Wavenet, and Transformer Networks 175 5.12.2.5 Training of Neural Networks 175 5.12.2.6 Stochastic Gradient Descent (SGD) 176 5.12.2.7 Adaptive Moment Estimation Method (ADAM) 176 References 177

6 Linear Prediction 181
6.1 Vocal Tract Models and Short-Term Prediction 181 6.1.1 All-Zero Model 182 6.1.2 All-Pole Model 183 6.1.3 Pole-Zero Model 183 6.2 Optimal Prediction Coefficients for Stationary Signals 187 6.2.1 Optimum Prediction 187 6.2.2 Spectral Flatness Measure 190 6.3 Predictor Adaptation 192 6.3.1 Block-Oriented Adaptation 192 6.3.1.1 Auto-Correlation Method 193 6.3.1.2 Covariance Method 194 6.3.1.3 Levinson–Durbin Algorithm 196 6.3.2 Sequential Adaptation 201 6.4 Long-Term Prediction 204 References 209

7 Quantization 211
7.1 Analog Samples and Digital Representation 211 7.2 Uniform Quantization 212 7.3 Non-uniform Quantization 219 7.4 Optimal Quantization 227 7.5 Adaptive Quantization 228 7.6 Vector Quantization 232 7.6.1 Principle 232 7.6.2 The Complexity Problem 235 7.6.3 Lattice Quantization 236 7.6.4 Design of Optimal Vector Code Books 236 7.6.5 Gain–Shape Vector Quantization 239 7.7 Quantization of the Predictor Coefficients 240 7.7.1 Scalar Quantization of the LPC Coefficients 241 7.7.2 Scalar Quantization of the Reflection Coefficients 241 7.7.3 Scalar Quantization of the LSF Coefficients 243 References 246

8 Speech Coding 249
8.1 Speech-Coding Categories 249 8.2 Model-Based Predictive Coding 253 8.3 Linear Predictive Waveform Coding 255 8.3.1 First-Order DPCM 255 8.3.2 Open-Loop and Closed-Loop Prediction 258 8.3.3 Quantization of the Residual Signal 259 8.3.3.1 Quantization with Open-Loop Prediction 259 8.3.3.2 Quantization with Closed-Loop Prediction 261 8.3.3.3 Spectral Shaping of the Quantization Error 262 8.3.4 ADPCM with Sequential Adaptation 266 8.4 Parametric Coding 268 8.4.1 Vocoder Structures 268 8.4.2 LPC Vocoder 271 8.5 Hybrid Coding 272 8.5.1 Basic Codec Concepts 272 8.5.1.1 Scalar Quantization of the Residual Signal 274 8.5.1.2 Vector Quantization of the Residual Signal 276 8.5.2 Residual Signal Coding: RELP 279 8.5.3 Analysis by Synthesis: CELP 282 8.5.3.1 Principle 282 8.5.3.2 Fixed Code Book 283 8.5.3.3 Long-Term Prediction, Adaptive Code Book 287 8.6 Adaptive Postfiltering 289 8.7 Speech Codec Standards: Selected Examples 293 8.7.1 GSM Full-Rate Codec 295 8.7.2 EFR Codec 297 8.7.3 Adaptive Multi-Rate Narrowband Codec (AMR-NB) 299 8.7.4 ITU-T/G.722: 7 kHz Audio Coding within 64 kbit/s 301 8.7.5 Adaptive Multi-Rate Wideband Codec (AMR-WB) 301 8.7.6 Codec for Enhanced Voice Services (EVS) 303 8.7.7 Opus Codec IETF RFC 6716 306 References 307

9 Concealment of Erroneous or Lost Frames 313
9.1 Concepts for Error Concealment 314 9.1.1 Error Concealment by Hard Decision Decoding 315 9.1.2 Error Concealment by Soft Decision Decoding 316 9.1.3 Parameter Estimation 318 9.1.3.1 MAP Estimation 318 9.1.3.2 MS Estimation 318 9.1.4 The A Posteriori Probabilities 319 9.1.4.1 The A Priori Knowledge 320 9.1.4.2 The Parameter Distortion Probabilities 320 9.1.5 Example: Hard Decision vs. Soft Decision 321 9.2 Examples of Error Concealment Standards 323 9.2.1 Substitution and Muting of Lost Frames 323 9.2.2 AMR Codec: Substitution and Muting of Lost Frames 325 9.2.3 EVS Codec: Concealment of Lost Packets 329 9.3 Further Improvements 330 References 331

10 Bandwidth Extension of Speech Signals 335
10.1 BWE Concepts 337 10.2 BWE using the Model of Speech Production 339 10.2.1 Extension of the Excitation Signal 340 10.2.2 Spectral Envelope Estimation 342 10.2.2.1 Minimum Mean Square Error Estimation 344 10.2.2.2 Conditional Maximum A Posteriori Estimation 345 10.2.2.3 Extensions 345 10.2.2.4 Simplifications 346 10.2.3 Energy Envelope Estimation 346 10.3 Speech Codecs with Integrated BWE 349 10.3.1 BWE in the GSM Full-Rate Codec 349 10.3.2 BWE in the AMR Wideband Codec 351 10.3.3 BWE in the ITU Codec G.729.1 353 References 355

11 NELE: Near-End Listening Enhancement 361
11.1 Frequency Domain NELE (FD) 363 11.1.1 Speech Intelligibility Index NELE Optimization 364 11.1.1.1 SII-Optimized NELE Example 367 11.1.2 Closed-Form Gain-Shape NELE 368 11.1.2.1 The NoiseProp Shaping Function 370 11.1.2.2 The NoiseInverse Strategy 371 11.1.2.3 Gain-Shape Frequency Domain NELE Example 372 11.2 Time Domain NELE (TD) 374 11.2.1 NELE Processing using Linear Prediction Filters 374 References 378

12 Single-Channel Noise Reduction 381
12.1 Introduction 381 12.2 Linear MMSE Estimators 383 12.2.1 Non-causal IIR Wiener Filter 384 12.2.2 The FIR Wiener Filter 386 12.3 Speech Enhancement in the DFT Domain 387 12.3.1 The Wiener Filter Revisited 388 12.3.2 Spectral Subtraction 390 12.3.3 Estimation of the A Priori SNR 391 12.3.3.1 Decision-Directed Approach 392 12.3.3.2 Smoothing in the Cepstrum Domain 392 12.3.4 Quality and Intelligibility Evaluation 393 12.3.4.1 Noise Oversubtraction 396 12.3.4.2 Spectral Floor 396 12.3.4.3 Limitation of the A Priori SNR 396 12.3.4.4 Adaptive Smoothing of the Spectral Gain 396 12.3.5 Spectral Analysis/Synthesis for Speech Enhancement 397 12.4 Optimal Non-linear Estimators 397 12.4.1 Maximum Likelihood Estimation 398 12.4.2 Maximum A Posteriori Estimation 400 12.4.3 MMSE Estimation 400 12.4.3.1 MMSE Estimation of Complex Coefficients 401 12.4.3.2 MMSE Amplitude Estimation 401 12.5 Joint Optimum Detection and Estimation of Speech 405 12.6 Computation of Likelihood Ratios 407 12.7 Estimation of the A Priori and A Posteriori Probabilities of Speech Presence 408 12.7.1 Estimation of the A Priori Probability 409 12.7.2 A Posteriori Speech Presence Probability Estimation 409 12.7.3 SPP Estimation Using a Fixed SNR Prior 410 12.8 VAD and Noise Estimation Techniques 411 12.8.1 Voice Activity Detection 411 12.8.1.1 Detectors Based on the Subband SNR 412 12.8.2 Noise Power Estimation Based on Minimum Statistics 413 12.8.3 Noise Estimation Using a Soft-Decision Detector 416 12.8.4 Noise Power Tracking Based on Minimum Mean Square Error Estimation 417 12.8.5 Evaluation of Noise Power Trackers 419 12.9 Noise Reduction with Deep Neural Networks 420 12.9.1 Processing Model 421 12.9.2 Estimation Targets 422 12.9.3 Loss Function 423 12.9.4 Input Features 423 12.9.5 Data Sets 423 References 425

13 Dual-Channel Noise and Reverberation Reduction 435
13.1 Dual-Channel Wiener Filter 435 13.2 The Ideal Diffuse Sound Field and Its Coherence 438 13.3 Noise Cancellation 442 13.3.1 Implementation of the Adaptive Noise Canceller 444 13.4 Noise Reduction 445 13.4.1 Principle of Dual-Channel Noise Reduction 446 13.4.2 Binaural Equalization–Cancellation and Common Gain Noise Reduction 447 13.4.3 Combined Single- and Dual-Channel Noise Reduction 449 13.5 Dual-Channel Dereverberation 449 13.6 Methods Based on Deep Learning 452 References 453

14 Acoustic Echo Control 457
14.1 The Echo Control Problem 457 14.2 Echo Cancellation and Postprocessing 462 14.2.1 Echo Canceller with Center Clipper 463 14.2.2 Echo Canceller with Voice-Controlled Soft-Switching 463 14.2.3 Echo Canceller with Adaptive Postfilter 464 14.3 Evaluation Criteria 465 14.3.1 System Distance 466 14.3.2 Echo Return Loss Enhancement 466 14.4 The Wiener Solution 467 14.5 The LMS and NLMS Algorithms 468 14.5.1 Derivation and Basic Properties 468 14.6 Convergence Analysis and Control of the LMS Algorithm 470 14.6.1 Convergence in the Absence of Interference 471 14.6.2 Convergence in the Presence of Interference 473 14.6.3 Filter Order of the Echo Canceller 476 14.6.4 Stepsize Parameter 477 14.7 Geometric Projection Interpretation of the NLMS Algorithm 479 14.8 The Affine Projection Algorithm 481 14.9 Least-Squares and Recursive Least-Squares Algorithms 484 14.9.1 The Weighted Least-Squares Algorithm 484 14.9.2 The RLS Algorithm 485 14.9.3 NLMS- and Kalman-Algorithm 488 14.9.3.1 NLMS Algorithm 490 14.9.3.2 Kalman Algorithm 490 14.9.3.3 Summary of Kalman Algorithm 492 14.9.3.4 Remarks 492 14.10 Block Processing and Frequency Domain Adaptive Filters 493 14.10.1 Block LMS Algorithm 494 14.10.2 Frequency Domain Adaptive Filter (FDAF) 495 14.10.2.1 Fast Convolution and Overlap-Save 496 14.10.2.2 FLMS Algorithm 499 14.10.2.3 Improved Stepsize Control 502 14.10.3 Subband Acoustic Echo Cancellation 502 14.10.4 Echo Canceller with Adaptive Postfilter in the Frequency Domain 503 14.10.5 Initialization with Perfect Sequences 505 14.11 Stereophonic Acoustic Echo Control 506 14.11.1 The Non-uniqueness Problem 508 14.11.2 Solutions to the Non-uniqueness Problem 508 References 510

15 Microphone Arrays and Beamforming 517
15.1 Introduction 517 15.2 Spatial Sampling of Sound Fields 518 15.2.1 The Near-field Model 518 15.2.2 The Far-field Model 519 15.2.3 Sound Pickup in Reverberant Spaces 521 15.2.4 Spatial Correlation Properties of Acoustic Signals 522 15.2.5 Uniform Linear and Circular Arrays 522 15.2.6 Phase Ambiguity in Microphone Signals 523 15.3 Beamforming 524 15.3.1 Delay-and-Sum Beamforming 525 15.3.2 Filter-and-Sum Beamforming 526 15.4 Performance Measures and Spatial Aliasing 528 15.4.1 Array Gain and Array Sensitivity 528 15.4.2 Directivity Pattern 529 15.4.3 Directivity and Directivity Index 531 15.4.4 Example: Differential Microphones 531 15.5 Design of Fixed Beamformers 534 15.5.1 Minimum Variance Distortionless Response Beamformer 535 15.5.2 MVDR Beamformer with Limited Susceptibility 537 15.5.3 Linearly Constrained Minimum Variance Beamformer 538 15.5.4 Max-SNR Beamformer 539 15.6 Multichannel Wiener Filter and Postfilter 540 15.7 Adaptive Beamformers 542 15.7.1 The Frost Beamformer 542 15.7.2 Generalized Side-Lobe Canceller 544 15.7.3 Generalized Side-Lobe Canceller with Adaptive Blocking Matrix 546 15.7.4 Model-Based Parsimonious-Excitation-Based GSC 547 15.8 Non-linear Multi-channel Noise Reduction 550 References 551

Index 555

Peter Vary is the former Head of the Institute of Communication Systems at RWTH Aachen University, Germany. Professor Vary is a Fellow of IEEE, EURASIP, and ITG, and has been a Distinguished Lecturer of the IEEE Signal Processing Society. Rainer Martin is Head of the Institute of Communication Acoustics at Ruhr-Universität Bochum, Germany. Professor Martin is a Fellow of the IEEE. Both authors have been actively involved in speech processing research and teaching for several decades.
