Preface | p. vii |
List of Contributors | p. xv |
Intelligent Environments: Methods, Algorithms and Applications | p. 1 |
Intelligent Environments | p. 1 |
What Is An Intelligent Environment? | p. 2 |
How Is An Intelligent Environment Built? | p. 2 |
Technology for Intelligent Environments | p. 2 |
Research Projects | p. 4 |
Private Spaces | p. 4 |
Public Spaces | p. 5 |
Middleware | p. 7 |
Chapter Themes in This Collection | p. 8 |
Conclusion | p. 9 |
References | p. 10 |
A Pervasive Sensor System for Evidence-Based Nursing Care Support | p. 13 |
Introduction | p. 13 |
Evidence-Based Nursing Care Support | p. 14 |
Background of the Project | p. 14 |
Concept of Evidence-Based Nursing Care Support | p. 15 |
Initial Goal of the Project: Falls Prevention | p. 16 |
Second Goal of the Project: Obtaining ADL of Inhabitants | p. 17 |
Related Work | p. 18 |
Overview and Implementations of the System | p. 19 |
Overview of the Evidence-Based Nursing Care Support System | p. 19 |
System Implementations | p. 20 |
Experiments and Analyses | p. 24 |
Tracking a Wheelchair for Falls Prevention | p. 24 |
Activity Transition Diagram: Transition of Activities in One Day | p. 25 |
Quantitative Evaluation of Daily Activities | p. 26 |
Probability of "Toilet" Activity | p. 28 |
Discussion of the Experimental Results | p. 29 |
Prospect of the Evidence-Based Nursing Care Support System | p. 30 |
Conclusions | p. 31 |
References | p. 32 |
Anomalous Behavior Detection: Supporting Independent Living | p. 35 |
Introduction | p. 35 |
Related Work | p. 36 |
Methodology | p. 37 |
Unsupervised Classification Techniques | p. 37 |
Using HMM to Model Behavior | p. 38 |
Experimental Setup and Data Collection | p. 39 |
Noisy Data: Sources of Error | p. 40 |
Learning Activities | p. 40 |
Experimental Results | p. 41 |
Instance Class Annotation | p. 41 |
Data Preprocessing | p. 41 |
Models: Unsupervised Classification: Clustering and Allocation of Activities to Clusters | p. 43 |
Behaviors: Discovering Patterns in Activities | p. 45 |
Behaviors: Discovering Anomalous Patterns of Activity | p. 46 |
Discussion | p. 48 |
Conclusions | p. 49 |
References | p. 49 |
Sequential Pattern Mining for Cooking-Support Robot | p. 51 |
Introduction | p. 51 |
System Design | p. 53 |
Inference from Series of Human Actions | p. 53 |
Time Sequence Data Mining | p. 54 |
Human Behavior Inference Algorithm | p. 54 |
Activity Support for Humans | p. 57 |
Implementation | p. 59 |
IC Tag System | p. 59 |
Inference of Human's Next Action | p. 60 |
Cooking Support Interface | p. 61 |
Experimental Results | p. 63 |
Conclusions | p. 65 |
References | p. 66 |
Robotic, Sensory and Problem-Solving Ingredients for the Future Home | p. 69 |
Introduction | p. 69 |
Components of the Multiagent System | p. 70 |
The Robotic Platform Mobility Subsystem | p. 71 |
The Interaction Manager | p. 73 |
Environmental Sensors for People Tracking and Posture Recognition | p. 74 |
Monitoring Activities of Daily Living | p. 76 |
Schedule Representation and Execution Monitoring | p. 77 |
Constraint Management in the RoboCare Context | p. 78 |
From Constraint Violations to Verbal Interaction | p. 81 |
Multiagent Coordination Infrastructure | p. 82 |
Casting the MAC Problem to DCOP | p. 83 |
Cooperatively Solving the MAC Problem | p. 86 |
Conclusions | p. 87 |
References | p. 88 |
Ubiquitous Stereo Vision for Human Sensing | p. 91 |
Introduction | p. 91 |
Ubiquitous Stereo Vision | p. 93 |
Concept of Ubiquitous Stereo Vision | p. 93 |
Server-Client Model for USV | p. 93 |
Real Utilization Cases | p. 94 |
Hierarchical Utilization of 3D Data and Personal Recognition | p. 95 |
Acquisition of 3D Range Information | p. 95 |
Projection to Floor Plane | p. 96 |
Recognition of Multiple Persons and Interface | p. 98 |
Pose Recognition for Multiple People | p. 99 |
Personal Identification | p. 100 |
Interface for Space Control | p. 101 |
Human Monitoring in Open Space (Safety Management Application) | p. 101 |
Monitoring Railroad Crossing | p. 101 |
Station Platform Edge Safety Management | p. 103 |
Monitoring Huge Space | p. 104 |
Conclusion and Future Work | p. 105 |
References | p. 106 |
Augmenting Professional Training, an Ambient Intelligence Approach | p. 109 |
Introduction | p. 109 |
Color Tracking of People | p. 112 |
Counting People by Spatial Relationship Analysis | p. 113 |
Simple People Counting Algorithm | p. 113 |
Graphs of Blobs | p. 114 |
Estimation of Distance Between Blobs | p. 116 |
Temporal Pyramid for Distance Estimation | p. 117 |
Probabilistic Estimation of Groupings | p. 119 |
Grouping Blobs | p. 120 |
Experimental Results | p. 121 |
Conclusions | p. 124 |
References | p. 124 |
Stereo Omnidirectional System (SOS) and Its Applications | p. 127 |
Introduction | p. 127 |
System Configuration | p. 128 |
Image Integration | p. 131 |
Generation of Stable Images at Arbitrary Rotation | p. 133 |
An Example Application: Intelligent Electric Wheelchair | p. 136 |
Overview | p. 136 |
System Configuration | p. 136 |
Obstacle Detection | p. 138 |
Gesture/Posture Detection | p. 140 |
Conclusions | p. 140 |
References | p. 140 |
Video Analysis for Ambient Intelligence in Urban Environments | p. 143 |
Introduction | p. 143 |
Visual Data for Urban AmI | p. 144 |
Video Surveillance in Urban Environment | p. 145 |
The LAICA Project | p. 148 |
Automatic Video Processing for People Tracking | p. 149 |
People Detection and Tracking from Single Static Camera | p. 150 |
People Detection and Tracking from Distributed Cameras | p. 152 |
People Detection and Tracking from Moving Cameras | p. 154 |
Privacy and Ethical Issues | p. 155 |
References | p. 157 |
From Monomodal to Multimodal: Affect Recognition Using Visual Modalities | p. 161 |
Introduction | p. 161 |
Organization of the Chapter | p. 163 |
From Monomodal to Multimodal: Changes and Challenges | p. 164 |
Background Research | p. 164 |
Data Collection | p. 168 |
Data Annotation | p. 169 |
Synchrony/Asynchrony Between Modalities | p. 171 |
Data Integration/Fusion | p. 172 |
Information Complementarity/Redundancy | p. 174 |
Information Content of Modalities | p. 176 |
Monomodal Systems Recognizing Affective Face or Body Movement | p. 177 |
Multimodal Systems Recognizing Affect from Face and Body Movement | p. 179 |
Project 1: Multimodal Affect Analysis for Future Cars | p. 179 |
Project 2: Emotion Analysis in Man-Machine Interaction Systems | p. 182 |
Project 3: Multimodal Affect Recognition in Learning Environments | p. 183 |
Project 4: FABO-Fusing Face and Body Gestures for Bimodal Emotion Recognition | p. 184 |
Multimodal Affect Systems: The Future | p. 185 |
References | p. 187 |
Importance of Vision in Human-Robot Communication: Understanding Speech Using Robot Vision and Demonstrating Proper Actions to Human Vision | p. 191 |
Introduction | p. 191 |
Understanding Simplified Utterances Using Robot Vision | p. 193 |
Inexplicit Utterances | p. 193 |
Information Obtained by Vision | p. 194 |
Language Processing | p. 195 |
Vision Processing | p. 195 |
Synchronization Between Speech and Vision | p. 197 |
Experiments | p. 199 |
Communicative Head Gestures for Museum Guide Robots | p. 200 |
Observations from Guide-Visitor Interaction | p. 201 |
Prototype Museum Guide Robot | p. 203 |
Experiments at a Museum | p. 206 |
Conclusion | p. 208 |
References | p. 209 |
Index | p. 211 |
Table of Contents provided by Ingram. All Rights Reserved.