
Hardback, $273.95

Omnidirectional cameras, vision sensors that capture 360° images, have enjoyed growing success in recent years in computer vision, robotics and the entertainment industry. Modern omnidirectional cameras are compact, lightweight and inexpensive, and are therefore being integrated into an increasing number of robotic platforms and consumer devices. However, the special format of their output data requires dedicated tools for camera calibration, signal analysis and image interpretation.
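To make that special data format concrete, the short sketch below (an illustration, not code from the book) maps a pixel of an equirectangular 360° panorama to a viewing direction on the unit sphere, assuming the common convention that image columns span longitude and rows span latitude; the panorama resolution and axis convention used in the example are assumptions.

```python
import numpy as np

def equirect_pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit viewing direction.

    Assumes columns span longitude in [-pi, pi) and rows span latitude
    in [pi/2, -pi/2] (top row = north pole); conventions are illustrative.
    """
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi   # longitude of the pixel centre
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi  # latitude of the pixel centre
    # Direction on the unit sphere (x forward, y left, z up).
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

# Example: the centre pixel of a hypothetical 2048 x 1024 panorama
# looks straight ahead along the x axis.
print(equirect_pixel_to_ray(1023.5, 511.5, 2048, 1024))
```

Because every pixel encodes a direction on the sphere rather than a point on a planar retina, standard pinhole calibration and filtering tools cannot be applied to such images without adaptation.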

This book is divided into six chapters written by world-renowned scholars. In a rigorous yet accessible way, the mathematical foundation of omnidirectional vision is presented, from image geometry and camera calibration to image processing for central and non-central panoramic systems. Special emphasis is given to fisheye cameras and catadioptric systems, which combine mirrors with lenses. The main applications of omnidirectional vision, including 3D scene reconstruction and robot localization and navigation, are also surveyed. Finally, the recent trend towards AI-infused methods (deep learning architectures) and other emerging research directions are discussed.
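As a rough illustration of the central projection models surveyed in Chapter 2 (a sketch under common textbook assumptions, not the authors' code), the widely used unified sphere model projects a 3D point in two steps: first onto a unit sphere centred at the single effective viewpoint, then perspectively onto the image plane from a point offset by a mirror/lens parameter ξ along the optical axis. The intrinsic values in the example are hypothetical.

```python
import numpy as np

def unified_projection(point_3d, xi, fx, fy, cx, cy):
    """Project a 3D point with the unified central (sphere) model.

    xi is the mirror/lens parameter (xi = 0 reduces to a pinhole camera);
    fx, fy, cx, cy are conventional intrinsics. All values are illustrative,
    not calibration results from the book.
    """
    # Step 1: project the point onto the unit sphere centred at the viewpoint.
    s = np.asarray(point_3d, dtype=float)
    s = s / np.linalg.norm(s)
    # Step 2: perspective projection shifted by xi along the optical axis.
    x = s[0] / (s[2] + xi)
    y = s[1] / (s[2] + xi)
    # Apply the intrinsic parameters to obtain pixel coordinates.
    return np.array([fx * x + cx, fy * y + cy])

# Example with hypothetical intrinsics: a point one metre ahead and slightly
# to the right, seen through a fisheye-like camera (xi close to 1).
print(unified_projection([0.2, 0.0, 1.0], xi=0.95, fx=300.0, fy=300.0,
                         cx=320.0, cy=240.0))
```

With ξ = 0 the formula reduces to an ordinary pinhole projection, which is one reason this model is widely used as a common framework for central catadioptric and fisheye cameras.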

Edited by:   Fabio Morbidi, Pascal Vasseur
Imprint:   ISTE Ltd
Country of Publication:   United Kingdom
Weight:   644g
ISBN:   9781789451436
ISBN 10:   1789451434
Pages:   256
Publication Date:   15 December 2023
Language:   English
Audience:   Professional and scholarly, Undergraduate
Format:   Hardback
Publisher's Status:   Active
"Acknowledgments xi List of Acronyms xiii Preface xv Fabio MORBIDI and Pascal VASSEUR Chapter 1 Image Geometry 1 Peter STURM 1.1 Introduction 2 1.1.1 Outline of this chapter 5 1.2 Image formation and point-wise approximation 6 1.3 Projection and back-projection 7 1.4 Central and non-central cameras 12 1.5 ""Outer"" geometry: calibrated cameras 16 1.5.1 Given an image of a scene and a particular point in that image, where could the original point in the scene possibly be located? 17 1.5.2 Is it possible to precisely locate an object in 3D from a single image and if yes, what information is required to do so and how do we solve this problem mathematically? 17 1.5.3 Is it possible to estimate the motion of a camera just by taking images of an unknown scene? 19 1.5.4 Triangulation – reconstructing 3D points 21 1.5.5 Some remarks 22 1.6 ""Inner"" geometry: images of lines 24 1.7 Epipolar geometry 26 1.7.1 Nature of epipolar geometry 26 1.7.2 Dense stereo matching and rectification 31 1.8 Conclusion 35 1.9 Acknowledgments 35 1.10 References 36 Chapter 2 Models and Calibration Methods 39 Guillaume CARON 2.1 Introduction 39 2.2 Projection models 40 2.2.1 Perspective projection: a review 40 2.2.2 Ad hoc models 43 2.2.3 Unified central projection and its extensions 49 2.2.4 Generic models 54 2.3 Calibration methods 54 2.4 Conclusion 57 2.5 References 59 Chapter 3 Reconstruction of Environments 63 Maxime LHUILLIER 3.1 Prerequisites 64 3.1.1 Image rectification and matching constraints 64 3.1.2 From disparity to depth 65 3.1.3 Dynamic programming and semi-global matching methods (SGM) 66 3.1.4 Plane sweeping methods 67 3.1.5 Minimization of global energy (or cost function) 67 3.1.6 Propagation methods 68 3.1.7 Surface reconstruction methods 70 3.1.8 Estimation of the 3D using other sensors 72 3.2 Pros and cons for using omnidirectional cameras 72 3.2.1 Multi-cameras 73 3.2.2 Catadioptric cameras 73 3.2.3 Toward a wide use of the 360° cameras 75 3.3 Adapt dense stereo to omnidirectional cameras 75 3.3.1 Spherical rectifications 76 3.3.2 Cylindrical rectifications 79 3.3.3 Planar rectifications 80 3.3.4 Sphere sweeping 81 3.3.5 Neither sweeping nor standard rectification 83 3.4 Reconstruction from only one central image 84 3.4.1 Explicit use of geometric constraints 85 3.4.2 Deep learning 86 3.5 Reconstruction using stationary non-central camera 87 3.6 Reconstruction by a moving camera 89 3.6.1 From local to global models 89 3.6.2 Sparse approaches for local models 92 3.6.3 Sparse approaches for global models 93 3.6.4 Available software 98 3.7 Conclusion 99 3.8 References 100 Chapter 4 Catadioptric Processing and Adaptations 105 Fatima AZIZ, Ouiddad LABBANI-IGBIDA and C´edric DEMONCEAUX 4.1 Introduction 105 4.2 Preliminary concepts 106 4.2.1 Spherical equivalence models 106 4.2.2 Differential calculus and Riemannian geometry 108 4.3 Adapted image processing by differential calculus on quadratic surfaces 110 4.3.1 Riemannian geometry for hyperbolic mirrors 111 4.3.2 Riemannian geometry for spherical mirrors 111 4.3.3 Riemannian geometry for paraboloid mirrors 113 4.3.4 Application to active contour deformation 114 4.4 Adapted image processing by Riemannian geodesic metrics 115 4.4.1 Spatial Riemannian metric 117 4.4.2 Spatial-color metric 118 4.4.3 Application to Gaussian kernel based smoothing 119 4.4.4 Application to corner features detection 120 4.5 Adapted image processing by spherical geodesic distance 121 4.5.1 Neighborhood definition 123 4.5.2 Application to linear catadioptric image filtering 125 
4.5.3 Application to corner features detection and matching 127 4.6 Conclusion 132 4.7 References 133 Chapter 5 Non-Central Sensors and Robot Vision 135 Sio-hoi IENG 5.1 Introduction 135 5.1.1 Generalities 137 5.1.2 Biological eyes 138 5.2 Catadioptric sensors: reflector computation 139 5.2.1 Caustic surface of a catadioptric system 140 5.2.2 Caustic surface computation 141 5.2.3 Reflector computation 144 5.2.4 Methods for reflector with no axial symmetry 146 5.3 Plenoptic vision as a unique form of non-central vision 149 5.3.1 Formalism and design 150 5.3.2 Plenoptic camera 151 5.3.3 Applications in robotic navigation: plenoptic visual odometry 152 5.4 Conclusion 155 5.5 References 157 Chapter 6 Localization and Navigation with Omnidirectional Images 159 Helder Jesus ARAUJO, Pedro MIRALDO and Nathan CROMBEZ 6.1 Introduction 160 6.2 Modeling image formation of omnidirectional cameras 163 6.2.1 Central systems 165 6.2.2 Non-central systems 169 6.2.3 Mirrors with special profiles 171 6.2.4 Fisheye lenses 175 6.3 Localization and navigation 177 6.3.1 Metric localization and mapping 178 6.3.2 Topological localization and mapping 184 6.3.3 Visual odometry 189 6.3.4 SLAM 197 6.3.5 Multi-robot formation 206 6.4 Conclusion 209 6.5 References 210 Conclusion and Perspectives 219 Fabio MORBIDI and Pascal VASSEUR List of Authors 223 Index 225"

Pascal Vasseur is a Full Professor at the University of Picardie Jules Verne, France, and a member of the MIS laboratory. He is the head of the Department of Informatics. His research interests include computer vision and image processing, and their applications in intelligent transportation systems and mobile robotics.

Fabio Morbidi is an Associate Professor at the University of Picardie Jules Verne, France, and a member of the MIS laboratory. He is an IEEE Senior Member and currently serves as an Associate Editor of the IEEE Transactions on Robotics. His research interests include network systems and robotic vision.
