3D Videocommunication: Algorithms, Concepts and Real-time Systems in Human Centred Communication

Price: $209.95
Format: Hardback


Language: English
Publisher: John Wiley & Sons Inc
Published: 28 July 2005
The migration of immersive media towards telecommunication applications is advancing rapidly. Impressive progress in media compression and media representation, together with the ever-increasing bandwidth available to the customer, will foster the introduction of these services in the future. One of the key components of the envisioned applications is the transition from two-dimensional to three-dimensional audio-visual communications.

With contributions from key experts in the field, 3D Videocommunication:

- provides a complete overview of existing systems and technologies in 3D videocommunication, with guidance on future trends and research;
- considers all aspects of the 3D videocommunication processing chain, including video coding, signal processing and computer graphics;
- focuses on the current state of the art and highlights the directions in which the technology is likely to move;
- discusses in detail the relevance of 3D videocommunication for telepresence systems and immersive media;
- provides an exhaustive bibliography for further reading.

Researchers and students interested in the field of 3D audio-visual communications will find 3D Videocommunication a valuable resource that offers a broad overview of the current state of the art. Practising engineers in industry will also find it a useful tool for envisioning and building innovative applications.

Edited by:   Oliver Schreer, Peter Kauff, Thomas Sikora
Imprint:   John Wiley & Sons Inc
Country of Publication:   United States
Dimensions:   Height: 250mm, Width: 174mm, Spine: 26mm
Weight:   822g
ISBN:   9780470022719
ISBN 10:   047002271X
Pages:   320
Publication Date:   28 July 2005
Audience:   Professional and scholarly, Undergraduate
Format:   Hardback
Publisher's Status:   Active
Table of Contents

List of Contributors xiii
Symbols xix
Abbreviations xxi
Introduction (Oliver Schreer, Peter Kauff and Thomas Sikora) 1

Section I: Applications of 3D Videocommunication 5

1 History of Telepresence (Wijnand A. IJsselsteijn) 7
1.1 Introduction 7
1.2 The Art of Immersion: Barker's Panoramas 10
1.3 Cinerama and Sensorama 11
1.4 Virtual Environments 14
1.5 Teleoperation and Telerobotics 16
1.6 Telecommunications 18
1.7 Conclusion 19
References 20

2 3D TV Broadcasting (Christoph Fehn) 23
2.1 Introduction 23
2.2 History of 3D TV Research 24
2.3 A Modern Approach to 3D TV 26
2.3.1 A Comparison with a Stereoscopic Video Chain 28
2.4 Stereoscopic View Synthesis 29
2.4.1 3D Image Warping 29
2.4.2 A 'Virtual' Stereo Camera 30
2.4.3 The Disocclusion Problem 32
2.5 Coding of 3D Imagery 34
2.5.1 Human Factor Experiments 35
2.6 Conclusions 36
Acknowledgements 37
References 37

3 3D in Content Creation and Post-production (Oliver Grau) 39
3.1 Introduction 39
3.2 Current Techniques for Integrating Real and Virtual Scene Content 41
3.3 Generation of 3D Models of Dynamic Scenes 44
3.4 Implementation of a Bidirectional Interface Between Real and Virtual Scenes 46
3.4.1 Head Tracking 49
3.4.2 View-dependent Rendering 50
3.4.3 Mask Generation 50
3.4.4 Texturing 51
3.4.5 Collision Detection 52
3.5 Conclusions 52
References 52

4 Free Viewpoint Systems (Masayuki Tanimoto) 55
4.1 General Overview of Free Viewpoint Systems 55
4.2 Image Domain System 57
4.2.1 EyeVision 57
4.2.2 3D-TV 58
4.2.3 Free Viewpoint Play 59
4.3 Ray-space System 59
4.3.1 FTV (Free Viewpoint TV) 59
4.3.2 Bird's-eye View System 60
4.3.3 Light Field Video Camera System 62
4.4 Surface Light Field System 64
4.5 Model-based System 65
4.5.1 3D Room 65
4.5.2 3D Video 66
4.5.3 Multi-texturing 67
4.6 Integral Photography System 68
4.6.1 NHK System 68
4.6.2 1D-II 3D Display System 70
4.7 Summary 70
References 71

5 Immersive Videoconferencing (Peter Kauff and Oliver Schreer) 75
5.1 Introduction 75
5.2 The Meaning of Telepresence in Videoconferencing 76
5.3 Multi-party Communication Using the Shared Table Concept 79
5.4 Experimental Systems for Immersive Videoconferencing 83
5.5 Perspective and Trends 87
Acknowledgements 88
References 88

Section II: 3D Data Representation and Processing 91

6 Fundamentals of Multiple-view Geometry (Spela Ivekovic, Andrea Fusiello and Emanuele Trucco) 93
6.1 Introduction 93
6.2 Pinhole Camera Geometry 94
6.3 Two-view Geometry 96
6.3.1 Introduction 96
6.3.2 Epipolar Geometry 97
6.3.3 Rectification 102
6.3.4 3D Reconstruction 104
6.4 N-view Geometry 106
6.4.1 Trifocal Geometry 106
6.4.2 The Trifocal Tensor 108
6.4.3 Multiple-view Constraints 109
6.4.4 Uncalibrated Reconstruction from N Views 110
6.4.5 Autocalibration 111
6.5 Summary 112
References 112

7 Stereo Analysis (Nicole Atzpadin and Jane Mulligan) 115
7.1 Stereo Analysis Using Two Cameras 115
7.1.1 Standard Area-based Stereo Analysis 117
7.1.2 Fast Real-time Approaches 120
7.1.3 Post-processing 123
7.2 Disparity from Three or More Cameras 125
7.2.1 Two-camera versus Three-camera Disparity 127
7.2.2 Correspondence Search with Three Views 128
7.2.3 Post-processing 129
7.3 Conclusion 130
References 130

8 Reconstruction of Volumetric 3D Models (Peter Eisert) 133
8.1 Introduction 133
8.2 Shape-from-Silhouette 135
8.2.1 Rendering of Volumetric Models 136
8.2.2 Octree Representation of Voxel Volumes 137
8.2.3 Camera Calibration from Silhouettes 139
8.3 Space-carving 140
8.4 Epipolar Image Analysis 143
8.4.1 Horizontal Camera Motion 143
8.4.2 Image Cube Trajectory Analysis 145
8.5 Conclusions 148
References 148

9 View Synthesis and Rendering Methods (Reinhard Koch and Jan-Friso Evers-Senne) 151
9.1 The Plenoptic Function 152
9.1.1 Sampling the Plenoptic Function 152
9.1.2 Recording of the Plenoptic Samples 153
9.2 Categorization of Image-based View Synthesis Methods 154
9.2.1 Parallax Effects in View Rendering 154
9.2.2 Taxonomy of IBR Systems 156
9.3 Rendering Without Geometry 158
9.3.1 The Aspen Movie-Map 158
9.3.2 Quicktime VR 158
9.3.3 Central Perspective Panoramas 159
9.3.4 Manifold Mosaicing 159
9.3.5 Concentric Mosaics 161
9.3.6 Cross-slit Panoramas 162
9.3.7 Light Field Rendering 162
9.3.8 Lumigraph 163
9.3.9 Ray Space 164
9.3.10 Related Techniques 164
9.4 Rendering with Geometry Compensation 165
9.4.1 Disparity-based Interpolation 165
9.4.2 Image Transfer Methods 166
9.4.3 Depth-based Extrapolation 167
9.4.4 Layered Depth Images 168
9.5 Rendering from Approximate Geometry 169
9.5.1 Planar Scene Approximation 169
9.5.2 View-dependent Geometry and Texture 169
9.6 Recent Trends in Dynamic IBR 170
References 172

10 3D Audio Capture and Analysis (Markus Schwab and Peter Noll) 175
10.1 Introduction 175
10.2 Acoustic Echo Control 176
10.2.1 Single-channel Echo Control 177
10.2.2 Multi-channel Echo Control 179
10.3 Sensor Placement 181
10.4 Acoustic Source Localization 182
10.4.1 Introduction 182
10.4.2 Real-time System and Results 183
10.5 Speech Enhancement 185
10.5.1 Multi-channel Speech Enhancement 186
10.5.2 Single-channel Noise Reduction 187
10.6 Conclusions 190
References 191

11 Coding and Standardization (Aljoscha Smolic and Thomas Sikora) 193
11.1 Introduction 193
11.2 Basic Strategies for Coding Images and Video 194
11.2.1 Predictive Coding of Images 194
11.2.2 Transform Domain Coding of Images and Video 195
11.2.3 Predictive Coding of Video 198
11.2.4 Hybrid MC/DCT Coding for Video Sequences 199
11.2.5 Content-based Video Coding 201
11.3 Coding Standards 202
11.3.1 JPEG and JPEG 2000 202
11.3.2 Video Coding Standards 202
11.4 MPEG-4: An Overview 204
11.4.1 MPEG-4 Systems 205
11.4.2 BIFS 205
11.4.3 Natural Video 206
11.4.4 Natural Audio 207
11.4.5 SNHC 208
11.4.6 AFX 209
11.5 The MPEG 3DAV Activity 210
11.5.1 Omnidirectional Video 210
11.5.2 Free-viewpoint Video 212
11.6 Conclusion 214
References 214

Section III: 3D Reproduction 217

12 Human Factors of 3D Displays (Wijnand A. IJsselsteijn, Pieter J.H. Seuntiëns and Lydia M.J. Meesters) 219
12.1 Introduction 219
12.2 Human Depth Perception 220
12.2.1 Binocular Disparity and Stereopsis 220
12.2.2 Accommodation and Vergence 222
12.2.3 Asymmetrical Binocular Combination 223
12.2.4 Individual Differences 224
12.3 Principles of Stereoscopic Image Production and Display 225
12.4 Sources of Visual Discomfort in Viewing Stereoscopic Displays 226
12.4.1 Keystone Distortion and Depth Plane Curvature 227
12.4.2 Magnification and Miniaturization Effects 228
12.4.3 Shear Distortion 229
12.4.4 Cross-talk 229
12.4.5 Picket Fence Effect and Image Flipping 230
12.5 Understanding Stereoscopic Image Quality 230
References 231

13 3D Displays (Siegmund Pastoor) 235
13.1 Introduction 235
13.2 Spatial Vision 236
13.3 Taxonomy of 3D Displays 237
13.4 Aided-viewing 3D Display Technologies 238
13.4.1 Colour-multiplexed (Anaglyph) Displays 238
13.4.2 Polarization-multiplexed Displays 239
13.4.3 Time-multiplexed Displays 239
13.4.4 Location-multiplexed Displays 240
13.5 Free-viewing 3D Display Technologies 242
13.5.1 Electroholography 242
13.5.2 Volumetric Displays 243
13.5.3 Direction-multiplexed Displays 244
13.6 Conclusions 258
References 258

14 Mixed Reality Displays (Siegmund Pastoor and Christos Conomis) 261
14.1 Introduction 261
14.2 Challenges for MR Technologies 263
14.3 Human Spatial Vision and MR Displays 264
14.4 Visual Integration of Natural and Synthetic Worlds 265
14.4.1 Free-form Surface-prism HMD 265
14.4.2 Waveguide Holographic HMD 266
14.4.3 Virtual Retinal Display 267
14.4.4 Variable-accommodation HMD 267
14.4.5 Occlusion Handling HMD 268
14.4.6 Video See-through HMD 269
14.4.7 Head-mounted Projective Display 269
14.4.8 Towards Free-viewing MR Displays 270
14.5 Examples of Desktop and Hand-held MR Systems 273
14.5.1 Hybrid 2D/3D Desktop MR System with Multimodal Interaction 273
14.5.2 Mobile MR Display with Markerless Video-based Tracking 275
14.6 Conclusions 278
References 279

15 Spatialized Audio and 3D Audio Rendering (Thomas Sporer and Sandra Brix) 281
15.1 Introduction 281
15.2 Basics of Spatial Audio Perception 281
15.2.1 Perception of Direction 282
15.2.2 Perception of Distance 283
15.2.3 The Cocktail Party Effect 283
15.2.4 Final Remarks 284
15.3 Spatial Sound Reproduction 284
15.3.1 Discrete Multi-channel Loudspeaker Reproduction 284
15.3.2 Binaural Reproduction 287
15.3.3 Multi-object Audio Reproduction 287
15.4 Audiovisual Coherence 291
15.5 Applications 293
15.6 Summary and Outlook 293
References 293

Section IV: 3D Data Sensors 297

16 Sensor-based Depth Capturing (João G.M. Gonçalves and Vítor Sequeira) 299
16.1 Introduction 299
16.2 Triangulation-based Sensors 301
16.3 Time-of-flight-based Sensors 303
16.3.1 Pulsed Wave 304
16.3.2 Continuous-wave-based Sensors 304
16.3.3 Summary 308
16.4 Focal Plane Arrays 308
16.5 Other Methods 309
16.6 Application Examples 309
16.7 The Way Ahead 311
16.8 Summary 311
References 312

17 Tracking and User Interface for Mixed Reality (Yousri Abdeljaoued, David Marimon i Sanjuan and Touradj Ebrahimi) 315
17.1 Introduction 315
17.2 Tracking 316
17.2.1 Mechanical Tracking 317
17.2.2 Acoustic Tracking 317
17.2.3 Inertial Tracking 318
17.2.4 Magnetic Tracking 318
17.2.5 Optical Tracking 320
17.2.6 Video-based Tracking 320
17.2.7 Hybrid Tracking 323
17.3 User Interface 324
17.3.1 Tangible User Interfaces 324
17.3.2 Gesture-based Interfaces 325
17.4 Applications 328
17.4.1 Mobile Applications 328
17.4.2 Collaborative Applications 329
17.4.3 Industrial Applications 329
17.5 Conclusions 331
References 331

Index 335

Dr Oliver Schreer, Heinrich-Hertz-Institute and TU Berlin, Germany. Oliver Schreer is Adjunct Professor at the Faculty of Electrical Engineering and Computer Science, Technical University Berlin. He lectures on Image Processing in Videocommunications and is a regular guest editor for the IEEE Transactions on Circuits and Systems for Video Technology.

Dr Peter Kauff, Heinrich-Hertz-Institute, Berlin, Germany. Peter Kauff is head of the “Immersive Media & 3D Video” Group at the Heinrich-Hertz-Institute (HHI), Fraunhofer Gesellschaft, Berlin. He has been involved in numerous German and European projects related to digital HDTV signal processing and coding, interactive MPEG-4-based services, and advanced 3D video processing for immersive telepresence and immersive media.

Professor Dr Thomas Sikora, Head of the Communication Systems Group, Technical University of Berlin, Germany. As chairman of the ISO-MPEG video group (Moving Picture Experts Group), Dr Sikora was responsible for the development and standardization of the MPEG video coding algorithms. He frequently works as an industry consultant on issues related to interactive digital video. He is an appointed member of the supervisory boards of a number of German companies and international research organizations. He is an Associate Editor of the IEEE Signal Processing Magazine and the EURASIP journal Signal Processing: Image Communication, and currently serves as Editor-in-Chief of the IEEE Transactions on Circuits and Systems for Video Technology.


