3D Video: From Capture to Diffusion

by Laurent Lucas, Céline Loscos and Yannick Remion

  • Format: Hardcover
  • Copyright: 2013-11-25
  • Publisher: ISTE/Hermes Science Publishing



While 3D vision has existed for many years, the film industry's use of 3D cameras and video-based modeling has triggered an explosion of interest in 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century.
The chapters in this book cover a large spectrum of areas connected to 3D video, presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently active in these areas, provide the elements needed to understand the underlying computer science of these technologies, and consider applications and perspectives previously unexplored because of technological limitations.
This book guides the reader through the production process of 3D video: from acquisition, through data treatment and representation, to 3D diffusion. Several types of camera systems are considered (multiscopic or multiview), leading to different acquisition, modeling and storage-rendering solutions. Applications of these systems are also discussed to illustrate their varying performance benefits, making this book suitable for students, academics and those involved in the film industry.


Part 1. 3D Acquisition of Scenes
1. Foundation, Laurent Lucas, Yannick Remion and Céline Loscos.
2. Digital Cameras: Definitions and Principles, Min H. Kim, Nicolas Hautière and Céline Loscos.
3. Multiview Acquisition Systems, Frédéric Devernay, Yves Pupulin and Yannick Remion.
4. Shooting and Viewing Geometries in 3DTV, Jessica Prévoteau, Laurent Lucas and Yannick Remion.
5. Camera Calibration: Geometric and Colorimetric Correction, Vincent Nozick and Jean-Baptiste Thomas.
Part 2. Description/Reconstruction of 3D Scenes
6. Feature Points Detection and Image Matching, Michel Desvignes, Lara Younes and Barbara Romaniuk.
7. Multi- and Stereoscopic Matching, Depth and Disparity, Stéphanie Prévost, Cédric Niquin, Sylvie Chambon and Guillaume Gales.
8. 3D Scene Reconstruction and Structuring, Ludovic Blache, Muhannad Ismael and Philippe Souchet.
9. Synthesizing Intermediary Viewpoints, Luce Morin, Olivier Le Meur, Christine Guillemot, Vincent Jantet and Josselin Gautier.
Part 3. Standards and Compression of 3D Video
10. Multiview Video Coding (MVC), Benjamin Battin, Philippe Vautrot, Marco Cagnazzo and Frédéric Dufaux.
11. 3D Mesh Compression, Florent Dupont, Guillaume Lavoué and Marc Antonini.
12. Coding Methods for Depth Videos, Elie Gabriel Mora, Joël Jung, Béatrice Pesquet-Popescu and Marco Cagnazzo.
13. Stereoscopic Watermarking, Mihai Mitrea, Afef Chammem and Françoise Prêteux.
Part 4. Rendering and 3D Display
14. HD 3DTV and Autostereoscopy, Venceslas Biri and Laurent Lucas.
15. Augmented and/or Mixed Reality, Gilles Simon and Marie-Odile Berger.
16. Visual Comfort and Fatigue in Stereoscopy, Matthieu Urvoy, Marcus Barkowsky, Jing Li and Patrick Le Callet.
17. 2D–3D Conversion, David Grogna, Antoine Lejeune and Benoît Michel.
Part 5. Implementation and Outlets
18. 3D Model Retrieval, Jean-Philippe Vandeborre, Hedi Tabia and Mohamed Daoudi.
19. 3D HDR Images and Videos: Acquisition and Restitution, Jennifer Bonnard, Gilles Valette, Céline Loscos and Jean-Michel Nourrit.
20. 3D Visualization for Life Sciences, Aassif Benassarou, Sylvia Piotin, Manuel Dauchez and Dimitri Papathanassiou.
21. 3D Reconstruction of Sport Scenes, Sébastien Mavromatis and Jean Sequeira.
22. Experiments in Live Capture and Transmission of Stereoscopic 3D Video Images, David Grogna and Jacques G. Verly.

About the Authors

Laurent Lucas currently leads the SIC research group and is in charge of the virtual reality platform of the URCA (University of Reims Champagne Ardenne) in France. His research interests include visualization and co-operation between image processing and computer graphics, particularly in 3DTV, and their applications.
Céline Loscos is Professor at the URCA, within the CReSTIC laboratory, and teaches computer science at the University Institute of Technology (IUT) in Champagne Ardenne, France.
Yannick Remion’s research interests include dynamic animation, simulation and co-operation between image processing and computer graphics as well as 3D vision.

Table of Contents

Chapter 1. Fundamentals

1.1. Introduction

1.2. A short history

1.2.1. Pinhole model

1.2.2. 3D and binocular vision

1.2.3. Reconstruction

1.3. Stereopsis and 3D physiological aspects

1.4. 3D computer vision

1.5. Conclusion

1.6. Bibliography

Chapter 2. Digital cameras: definitions and principles

2.1. Introduction

2.2. Acquiring light: physics fundamentals

2.2.1. Radiometry and photometry: scene illumination

2.2.2. Wavelengths and color spaces

2.3. Digital cameras

2.3.1. Optical components: camera optics; errors and corrections

2.3.2. Electronic components: camera sensors; digital noise and noise removal algorithms

2.3.3. Main camera functions and control: autobracketing

2.3.4. Image storage formats

2.4. Camera, human vision and color

2.4.1. Adapting optics and electronics to human perception

2.4.2. Color control: camera response; color characterization

2.5. Outperforming

2.5.1. HDR imaging

2.5.2. Hyperspectral acquisition

2.6. Conclusion

2.7. Bibliography

Chapter 3: Multiview acquisition systems

3.1. Introduction: what is a multiview acquisition system?

3.2. Binocular systems

3.3. Lateral or directional multiview systems

3.4. Surrounding or omnidirectional multiview systems

3.5. Hybrid systems: RGBZ and TOF

3.6. Conclusion

3.7. Bibliography

Chapter 4: Shooting and viewing geometry for 3D TV 

4.1. Introduction

4.2. Output geometry of imaginary relief

4.2.1. Description

4.2.2. Possible modelling

4.3. Capture geometry of imaginary relief

4.3.1. Type of geometry to be used

4.3.2. Possible modelling

4.4. Link between output and capture geometry

4.4.1. Geometric characterization of imaginary relief experience

4.4.2. Distortion models

4.5. Methodology for specifying multiscopic acquisition

4.5.1. Controlling relief distortion

4.5.2. Perfect relief effect

4.6. Implementation in OpenGL

4.7. Conclusion

4.8. Bibliography

Chapter 5: Geometric and colorimetric calibration and rectification

5.1. Introduction

5.2. Camera calibration

5.2.1. Introduction

5.2.2. Camera model

5.2.3. Calibration with a target

5.2.4. Automatic methods

5.3. Radial distortion

5.3.1. Introduction

5.3.2. When should distortion be corrected?

5.3.3. Radial distortion correction models

5.4. Image rectification

5.4.1. Introduction: problem statement

5.4.2. Image-based methods

5.4.3. Camera-based methods

5.4.4. Rectification of more than two images simultaneously

5.5. Camera colorimetric aspects

5.5.1. Applied colorimetry

5.5.2. Camera colorimetric calibration: estimation of F(k) and S(k); in practice

5.6. Conclusion

5.7. Bibliography

Chapter 6: Feature points detection and image matching

6.1. Introduction

6.2. Feature points

6.2.1. Point detectors: differential operators (autocorrelation, Harris and Hessian); scale invariance using multi-scale analysis; corner intensity model

6.2.2. Contour and feature point detection: shape detectors; curvature and scale space

6.2.3. Stable regions: IBR, MSER

6.3. Feature point descriptors

6.3.1. Scale-invariant feature transform: SIFT

6.3.2. Gradient Location and Orientation Histogram: GLOH

6.3.3. DAISY descriptor

6.3.4. Speeded Up Robust Features: SURF

6.3.5. Multi-Scale Oriented Patches: MOPS

6.3.6. Shape context

6.4. Image matching

6.4.1. Descriptor matching

6.4.2. Estimation of the geometric transform (match grouping): generalized Hough transform; graph matching; RANSAC and variants

6.5. Conclusion

Chapter 7: Multi- and stereoscopic matching, depth and disparity

7.1. Introduction

7.2. Difficulties, primitives and density of stereoscopic matching

7.2.1. Difficulties

7.2.2. Primitives and density

7.3. Simplified geometry and disparity

7.4. Description of stereoscopic and multiscopic methods

7.4.1. Algorithms of local and global matching

7.4.2. Principal constraints

7.4.3. Energy costs

7.5. Methods with explicit handling of occlusions

7.5.1. Local stereoscopic method – seed propagation: seed initialization; propagation approach; region-based regularization

7.5.2. Global multiscopic method: formulation of multiscopic matching; energy function and geometric consistency constraint; global selection and partition construction; results

7.6. Conclusion

7.7. Bibliography

Chapter 8: Multiview reconstruction

8.1. Problem statement

8.2. Visual hull-based reconstruction

8.2.1. Methods to extract visual hulls

8.2.2. Reconstruction methods

8.2.3. Improving volume reconstruction: voxel coloring; space carving

8.3. Industrial implementation

8.3.1. Hardware acceleration

8.3.2. Results

8.4. Temporal structuring of reconstructions

8.4.1. Extraction of a generic skeleton

8.4.2. Computation of motion fields

8.5. Conclusion

8.6. Bibliography

Chapter 9: Synthesis of intermediate views

9.1. Introduction

9.2. Interpolation/extrapolation view synthesis

9.2.1. Direct and inverse projections: direct projection equations; direct projection artefacts; inverse projection

9.2.2. Limiting view synthesis artefacts: cracks; ghost outlines; open zones

9.2.3. View interpolation: fusion of virtual views; detection and smoothing of interpolation artefacts; floating textures; view extrapolation

9.3. Open zone filling

9.3.1. State of the art in 2D inpainting techniques: diffusion-based methods; similarity-based methods

9.3.2. 3D inpainting: extension of Criminisi et al. [CRI 04] to the 3D context; global optimisation-based inpainting

9.4. Conclusion

9.5. Bibliography

Chapter 10: Encoding multiview videos

10.1. Introduction

10.2. Compression of stereoscopic videos

10.2.1. 3D formats: frame-compatible; mixed-resolution stereo; 2D-plus-depth

10.2.2. Associated coding techniques: simulcast; MPEG-C and H.264/AVC APS; H.264/MVC stereo profile

10.3. Compression of multiview videos

10.3.1. 3D formats: MVV and MVD; LDI and LDV; DES

10.3.2. Associated coding techniques: H.264/MVC multiview profile; LDI-dedicated methods

10.4. Conclusion

10.5. Bibliography

Chapter 11: 3D mesh compression

11.1. Introduction

11.2. Background on coding: rate-distortion theory

11.3. Multi-resolution coding of surface meshes

11.4. Topological and progressive coding

11.4.1. Mono-resolution compression

11.4.2. Multi-resolution compression: connectivity-driven approaches; geometry-driven approaches

11.5. Mesh sequence compression

11.5.1. Definitions

11.5.2. Spatio-temporal prediction methods

11.5.3. Segmentation-based methods

11.5.4. Transformation-based methods

11.6. Quality assessment: classical and perceptual metrics

11.6.1. Classical metrics

11.6.2. Perceptual metrics

11.7. Conclusion

11.8. Bibliography

Chapter 12: Depth Video Coding Technologies

12.1. Introduction

12.2. Analysis of depth map characteristics

12.3. Depth video coding tools

12.3.1. Tools that exploit the inherent characteristics of depth maps: above-block-level coding tools; block-level coding tools

12.3.2. Tools that exploit the correlation with the associated texture: prediction mode inheritance/selection; prediction information inheritance; spatial transforms

12.3.3. Tools that optimize depth video coding for virtual view quality: view synthesis optimization; distortion models

12.4. Conclusion

12.5. Bibliography

Chapter 13. Stereoscopic watermarking

13.1. Introduction

13.2. Stereoscopic watermarking constraints

13.2.1. Theoretical framework

13.2.2. Properties: transparency; robustness; data payload; computational cost

13.2.3. Corpus: design criteria; processed corpora

13.2.4. Conclusion

13.3. State-of-the-art on stereoscopic watermarking

13.4. Comparative study

13.4.1. Transparency: subjective evaluation; objective evaluation

13.4.2. Robustness

13.4.3. Computational cost

13.4.4. Conclusion

13.5. Conclusion and perspectives

13.6. References

Chapter 14: 3D HD TV and autostereoscopy


14.2. Technological principles

14.2.1. Stereoscopic devices with glasses

14.2.2. Autostereoscopic devices

14.2.4. Measurements of autostereoscopic displays

14.3. Mixing filters

14.4. Generating and interlacing views

14.4.1. Virtual view generation

14.4.2. Interlacing views

14.5. Future developments



Chapter 15:  Augmented and/or mixed reality

15.1. Introduction

15.2. Real-time pose computation

15.2.1. Requirements for pose computation

15.2.2. Model/image feature matching: iterative tracking methods; recognition methods; the real-time constraint

15.2.3. Pose computation, the main PnP algorithms: reprojection error minimization; direct methods

15.2.4. Pose computation and planar surfaces

15.3. Model acquisition

15.3.1. Offline modeling

15.3.2. Online modeling

15.4. Conclusion

15.5. Bibliography

Chapter 16. Visual comfort and visual fatigue for stereoscopic restitution

16.1. Introduction

16.2. Visual comfort and fatigue: definition and evaluation

16.2.1. Visual fatigue

16.2.2. Visual comfort and discomfort

16.2.3. Assessment and evaluation of fatigue and discomfort

16.3. Symptoms and signs of fatigue and discomfort

16.3.1. Ocular and oculomotor fatigue

16.3.2. Cognitive fatigue

16.3.3. Symptoms and signs of discomfort

16.4. Sources of fatigue and discomfort

16.4.1. Ocular constraints

16.4.2. Cognitive constraints

16.5. Application to 3D displays and contents

16.5.1. Comfort zone

16.5.2. Restitution defects

16.5.3. Accommodation and blur

16.5.4. Visual attention

16.5.5. Null or erroneous motion parallax

16.5.6. Exposure duration and training effects

16.6. Predicting visual fatigue and discomfort: emerging models

16.7. Conclusion

16.8. Bibliography

Chapter 17: 2D to 3D conversion


17.2. 2D-3D conversion workflow

17.3. Content preparation for conversion

17.3.1. Depth script

17.3.2. The advantage of video over still images

17.3.3. The illusion of automatic conversion

17.3.4. Special cases of automatic conversion

17.3.5. Optimal content for 2D-3D conversion

17.4. Conversion steps

17.4.1. Segmentation step

17.4.2. Depth map computation and propagation

17.4.3. Missing image generation

17.5. 3D-3D conversion



Chapter 18. 3D-model retrieval

18.1. Introduction

18.2. General principles of shape retrieval

18.3. Global 3D-shape descriptors

18.3.1. Shape descriptor histogram

18.3.2. Spherical harmonics

18.4. 2D-view based methods

18.5. Local 3D-shape descriptors

18.5.1. 3D-shape spectrum descriptor

18.5.2. 3D-shape context

18.5.3. Spin-images

18.5.4. Heat kernel signature

18.6. 3D-shape similarities

18.6.1. Reeb graphs

18.6.2. Bag-of-Words

18.7. 3D-shape retrieval in 3D-videos

18.7.1. Action recognition in 3D-videos

18.7.2. Facial expression recognition in 3D-videos

18.8. Performance evaluation of shape retrieval methods

18.8.1. Statistical tools for evaluation

18.8.2. Benchmarks

18.9. Applications

18.9.1. Browsing in a collection of 3D-models

18.9.2. Modeling by example

18.9.3. Decision aid

18.9.4. 3D-face recognition

18.10. Conclusion

18.11. References

Chapter 19: 3D HDR images and videos: acquisition and restitution

19.1. Introduction

19.2. HDR and 3D acquisition

19.2.1. 1D subspace: HDR images

19.2.2. 2D subspace: HDR videos

19.2.3. 2D subspace, 3D HDR images: stereo matching for HDR reconstruction; discussion of color data consistency

19.2.4. Extension to the whole space: 3D HDR videos

19.3. 3D HDR rendering

19.3.1. Rendering on a 3D-dedicated display

19.3.2. Rendering on an HDR-dedicated display

19.4. Conclusion

19.5. Bibliography

Chapter 20: 3D TV visualization for life sciences


20.2. Scientific visualization

20.2.1. 3D construction

20.2.2. Interactive visualization

20.3. Medical imaging

20.3.1. Volume visualization in medical imaging

20.4. Molecular modeling

20.4.1. Classical modes of visualization

20.4.2. Molecular modeling in relief



Chapter 21: 3D reconstruction of sport scenes


21.2. Automatic selection of the region of interest

21.2.1. Role and characteristics of the region of interest

21.2.2. Color space segmentation

21.2.3. Spatial consistency

21.3. Primitive extraction using the Hough transform

21.3.1. Ellipsoid segment detection

21.4. Primitive/model matching

21.4.1. Line beams



Chapter 22: Experimental, live retransmissions in stereoscopic 3D (S-3D)


22.2. Show retransmissions

22.3. Surgery retransmissions

22.4. Steadicam magazine retransmissions

22.5. Transatlantic video-presentation retransmission

22.6. Bicycle competition retransmissions


