(PhD Thesis presentation)
The use of wearable or on-body sensors to monitor human behavior is now at the forefront of human activity recognition. Nevertheless, current results in human activity recognition are fairly constrained and generally restricted to ideal or laboratory scenarios. Activity recognition systems are designed to comply with ideal conditions and are of limited utility in realistic domains. To become applicable in the real world, activity recognition systems must satisfy operational and quality requirements that pose complex challenges, most of which have been only sparsely and vaguely investigated to date.
Classic activity recognition systems assume that the sensor setup remains identical during the lifelong use of the system. However, in users' daily life, sensors may fail, run out of battery, be misplaced or experience topological variations. These changes may lead to significant variations in the sensor measurements with respect to the default case. Consequently, activity recognition systems devised for ideal conditions may react in an undesired manner to imperfect, unknown or anomalous sensor data. This potentially translates into a partial or total malfunctioning of the activity recognition system.
In this thesis, novel expert systems are proposed to address the challenges of making activity recognition systems functional in real-world scenarios.
An innovative methodology, the hierarchical weighted classifier, which leverages the potential of multi-sensor configurations, is defined to overcome the effects of sensor failures and faults. This approach proves as valid as other standard activity recognition models in ideal conditions while outperforming them in robustness to sensor failures and fault tolerance. The methodology also shows outstanding capabilities to assimilate sensor deployment anomalies caused by users placing the sensors themselves. Furthermore, a novel multimodal transfer learning method is developed that operates at runtime, with low overhead and without user or system-designer intervention. This approach automatically translates activity recognition capabilities from an existing system to an untrained system, even across sensor modalities. This is of key interest for supporting sensor replacements as part of equipment maintenance, sensor additions in system upgrades, and the exploitation of sensors that happen to be available in the user environment. The potential of these advanced expert models leads to new research directions such as autonomous self-configuration, auto-adaptation and evolvability in activity recognition. Thus, this thesis opens up a new range of opportunities for activity recognition systems to operate in real-world scenarios.
Work described in the following dissertation:
Banos, O.: Robust Expert Systems for more Flexible Real-World Activity Recognition. Ph.D. Thesis, University of Granada, Granada (SPAIN) (2014)
Robust Expert Systems for more Flexible Real-World Activity Recognition
1. Robust Expert Systems for
more Flexible Real-World
Activity Recognition
Granada, Friday, April 25, 2014
Presented by: Oresti Baños
Supervised by: Miguel Damas, Héctor Pomares and Ignacio Rojas
Department of Computer Architecture and Computer Technology,
CITIC-UGR, University of Granada, SPAIN
4. Activity Recognition (AR)
• Activity recognition concept
“Recognize the actions and goals of one or more agents from a series of
observations on the agents' actions and the environmental conditions”
• Activity recognition process
INTRODUCTION TECHNOLOGICAL ANOMALIES DEPLOYMENT VARIATIONS NETWORK CHANGES CONCLUSIONS
Phenomena: human activity (body motion) → Measurement: sensing (ambient/wearable) → Processing: data conditioning and knowledge inference → Recognized activity
5. Wearable Activity Recognition
• Wearable activity recognition systems are ready!
– "The first system capable of fully recognizing your daily routine." AtlasWearables (2014)
– "The simplest way to understand your day and night." Jawbone Up (2014)
– "The best activity tracker on the market." Fitbit Force (2014)
– "The device that tracks your active life and measures all kinds of activities." Nike Fuel (2014)
9. Thesis Motivation and Objectives
• Motivation:
"Create more advanced systems capable of handling real-world AR issues, as well as incorporating more intelligent capabilities to transform experimental prototypes into actual usable applications"
• Objectives:
– O1: "Investigate the tolerance of standard AR systems to unforeseen sensor failures and faults, as well as contribute with an alternate approach to cope with these technological anomalies" → Fault tolerance
– O2: "Research the robustness of standard AR systems to unforeseen variations in the sensor deployment, as well as contribute with an alternate approach to cope with these practical anomalies" → Usability, unobtrusiveness
– O3: "Study the capacity of standard AR systems to support unforeseen changes in the sensor network, as well as contribute with an alternate approach to cope with these topological variations" → Self-configuration, auto-adaptation, evolvability
10. Activity Recognition Process
Phenomena: human activity (body motion) → Measurement: sensing (ambient/wearable) → Processing: data conditioning and knowledge inference → Recognized activity
How does it work exactly?
11. Activity Recognition Process
The Activity Recognition Chain (ARC)
Phenomena: human activity (body motion) → Measurement: sensing (ambient/wearable) → Processing: data conditioning and knowledge inference → Recognized activity
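The activity recognition chain can be sketched as a minimal pipeline: segment the sensed signal into windows, extract a feature vector per window, and feed the features to a classifier. The window length, step and the mean/std features below are illustrative placeholders, not the thesis's exact configuration.

```python
import numpy as np

def sliding_windows(signal, win_len, step):
    """Segment a (n_samples, n_channels) signal into fixed-length windows."""
    return [signal[i:i + win_len] for i in range(0, len(signal) - win_len + 1, step)]

def extract_features(window):
    """Toy feature vector: per-channel mean and standard deviation."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def recognize(signal, classifier, win_len=300, step=150):
    """Measurement -> processing -> recognized activity: one label per window."""
    feats = np.array([extract_features(w) for w in sliding_windows(signal, win_len, step)])
    return classifier.predict(feats)
```

Any classifier object exposing a `predict` method (e.g. a trained KNN) can be plugged into `recognize`.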
19. Tolerance of AR systems to
sensor faults and failures
Objective: “Investigate the tolerance of standard AR systems to unforeseen
sensor failures and faults, as well as contribute with an alternate approach
to cope with these technological anomalies”
20. Problem Statement
SENSOR ERRORS
Are standard activity recognition systems prepared to cope with sensor technological anomalies?
Is it possible to keep the systems functioning under the effects of sensor errors?
Activity recognition process: body motion sensing → signal processing and reasoning → recognition of activities
22. Sensor Technological Anomalies in AR: Related Work
• Detection of sensor anomalies
– Sensor query (Rost06)
– Neighborhood data correlation
• Signal level (Yao10)
• Feature level (Ramanathan09)
• Reasoning (Rajasegarar07,
Ganeriwal08)
• Counteraction of sensor anomalies
– Data imputation (Uchida13)
– Sensor fusion (Sagha13)
S. Rost and H. Balakrishnan. Memento: A health monitoring system for wireless sensor
networks. In 3rd Annual IEEE Communications Society on Sensor and Ad Hoc
Communications and Networks, volume 2, pp. 575-584, 2006.
Y. Yao, A. Sharma, L. Golubchik, and R. Govindan. Online anomaly detection for sensor
systems: A simple and efficient approach. Performance Evaluation, 67(11):1059-1075,
November 2010.
N. Ramanathan, T. Schoellhammer, E. Kohler, K. Whitehouse, T. Harmon, and D. Estrin.
Suelo: human-assisted sensing for exploratory soil monitoring studies. In Proceedings of
the 7th ACM Conference on Embedded Networked Sensor Systems, pp. 197-210, 2009.
S. Rajasegarar, C. Leckie, M. Palaniswami, and J. C. Bezdek. Quarter sphere based
distributed anomaly detection in wireless sensor networks. In IEEE International
Conference on Communications, pp. 3864-3869, June 2007.
S. Ganeriwal, L. K. Balzano, and M. B. Srivastava. Reputation-based framework for high integrity sensor networks. ACM Transactions on Sensor Networks, 4(3):1-37, June 2008.
R. Uchida, H. Horino, and R. Ohmura. Improving fault tolerance of wearable sensor-based activity recognition techniques. In Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication, pp. 633-644, 2013.
H. Sagha, H. Bayati, J. del R. Millán, and R. Chavarriaga. On-line anomaly detection and resilience in classifier ensembles. Pattern Recognition Letters, 34(15):1916-1927, 2013.
23. Sensor Failures in Classic AR Systems
• Single-sensor ARC (SARC)
24. • Single-sensor ARC (SARC)
Sensor Failures in Standard AR Systems
If the sensor fails, the complete system fails.
Solution → use more sensors for redundancy (multi-sensor ARC or MARC)
25. • Feature fusion multi-sensor ARC (FFMARC)
Sensor Failures in Standard AR Systems
26. • Feature fusion multi-sensor ARC (FFMARC)
Sensor Failures in Standard AR Systems
If a sensor fails, the complete system fails.
Solution → independent ARCs + decision fusion
27. • Decision fusion multi-sensor ARC (DFMARC)
Sensor Failures in Standard AR Systems
28. • Decision fusion multi-sensor ARC (DFMARC)
Sensor Failures in Standard AR Systems
If a sensor fails, the system is still capable of functioning, but… is it capable of recognition?
29. Sensor Failures in Standard AR Systems
• Hierarchical decision (HD)
– Information from some sensors is more valuable than from others (e.g., a specific body part for a certain activity) → ranking of decisions
– Decisions are mainly made at the top (recognition relies on one or a few sensors) → problem when the top-ranked sensors become unavailable
• Majority voting (MV)
– Equality scheme (all sensors have the same importance) → fairness, decisiveness
– A plurality of weak decisors may prevail over the rest → tyranny of the majority
Each sensor-level classifier i ∈ {1, …, M} outputs a score vector (c_i1, c_i2, …, c_ik) over the k activity classes. The class-wise fused decisions are

c_1 = φ(c_11, c_21, …, c_M1)
c_2 = φ(c_12, c_22, …, c_M2)
…
c_k = φ(c_1k, c_2k, …, c_Mk)
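The class-wise fusion rule above can be sketched with a weighted-sum φ; this is an illustrative simplification, not the thesis's exact HWC weighting. Equal weights behave like majority voting, while unequal weights let a highly reliable sensor outweigh a plurality of weak decisors.

```python
import numpy as np

def fuse_decisions(scores, weights=None):
    """scores: (M, k) array, row i holds classifier i's class scores c_i1..c_ik.
    Fused score per class j: c_j = sum_i w_i * c_ij; returns the argmax class."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:                       # equal weights ~ majority voting
        weights = np.ones(scores.shape[0])
    fused = np.asarray(weights, dtype=float) @ scores  # phi as a weighted sum per class
    return int(np.argmax(fused))
```

With three one-hot votes `[[1, 0], [1, 0], [0, 1]]`, equal weights pick class 0 (two votes against one), whereas weights `(0.2, 0.2, 1.0)` pick class 1: a single trusted sensor overrides the weak majority.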
40. • Tolerance to sensor failures:
Evaluation of the Tolerance to Sensor Technological Anomalies
Evaluated model: HWC. Parameters: 10-feature set, KNN classifier. Evaluation procedure: 10-fold CV, 100 iterations. Legend: H (hip), W (wrist), A (arm), K (ankle), T (thigh). Baseline accuracy (ideal conditions) = 96.34%.
41. • Tolerance to sensor failures:
Evaluation of the Tolerance to Sensor Technological Anomalies — 1 missing sensor (same model, parameters, procedure and legend as above)
42. • Tolerance to sensor failures:
Evaluation of the Tolerance to Sensor Technological Anomalies — 1 and 2 missing sensors (same model, parameters, procedure and legend as above)
43. • Tolerance to sensor failures:
Evaluation of the Tolerance to Sensor Technological Anomalies — 1 to 4 missing sensors (same model, parameters, procedure and legend as above)
44. • Tolerance to sensor faults:
Evaluation of the Tolerance to Sensor Technological Anomalies — ideal case vs. dynamic range shortening
46. Conclusions
• Assuming a lifelong invariant sensor setup is unrealistic and may lead to malfunctioning of the activity recognition system
• Body-worn sensors are subject to faults (signal degradation) and failures (absence of signal), normally unforeseen at design time and runtime
• Classic activity recognition approaches (SARC, FFMARC) cannot deal with sensor failures and are of limited utility under the effect of sensor faults
• The proposed alternate model (HWC) renders performance similar to standard activity recognition models in ideal conditions, proves robust to sensor failures and shows relevant tolerance to sensor faults
47. Robustness of AR systems to
sensor deployment variations
Objective: “Research the robustness of standard AR systems to unforeseen
variations in the sensor deployment, as well as contribute with an alternate
approach to cope with these practical anomalies”
48. Problem Statement
SENSOR DEPLOYMENT CHANGES
Are activity recognition systems flexible enough to allow users to wear the sensors on their own?
Is it possible to keep the systems functioning under the effects of sensor displacement?
Activity recognition process: body motion sensing → signal processing and reasoning → recognition of activities
49. Sensor Displacement
• Categories of sensor displacement
– Static: position changes that remain fixed across the execution of many activity instances, e.g. when the sensors are attached with a different displacement each day
– Dynamic: the effect of loose fitting of the sensors, e.g. when embedded into clothes
• Sensor displacement → new sensor position → change of the signal space
• Sensor displacement effects depend on
– Original/end position and body part
– Activities/gestures/movements performed
– Sensor modality
Sensor displacement = rotation (angular displacement) + translation (linear displacement)
50. Sensor Displacement Effects
Changes in the signal space propagate through the activity recognition chain (e.g., variations in the feature space)
[Figure: feature-space example — LC_IDEAL = LC_SELF, while RC_SELF ≠ RC_IDEAL]
51. Sensor Displacement in AR: Related Work
• Features invariant to sensor displacement
– Heuristics (Kunze08)
– Genetic algorithm for feature selection
(Förster09a)
• Feature distribution adaptation
– Covariate shift unsupervised adaptation (Bayati11)
– Online-supervised user-based calibration
(Förster09b)
• Classification (dis)similarity
– Output classifiers correlation (Sagha11)
K. Kunze and P. Lukowicz. Dealing with sensor displacement in motion-based
onbody activity recognition systems. In 10th international conference on
Ubiquitous computing, pp. 20–29, 2008.
K. Förster, P. Brem, D. Roggen, and G. Tröster. Evolving discriminative features
robust to sensor displacement for activity recognition in body area sensor
networks. In Intelligent Sensors, Sensor Networks and Information Processing
(ISSNIP), 2009 5th International Conference on, pp. 43–48, 2009.
H. Bayati, J. del R Millan, and R. Chavarriaga. Unsupervised adaptation to on-body
sensor displacement in acceleration-based activity recognition. In Wearable
Computers (ISWC), 2011 15th Annual International Symposium on, pp. 71–78,
June 2011.
K. Förster, D. Roggen, and G. Tröster. Unsupervised classifier self-calibration
through repeated context occurrences: Is there robustness against sensor
displacement to gain? In Proc. 13th IEEE Int. Symposium on Wearable Computers
(ISWC), pp. 77–84, 2009.
H. Sagha, J. R. del Millán, and R. Chavarriaga. Detecting and rectifying anomalies in opportunistic sensor networks. In 8th Int. Conf. on Networked Sensing Systems, pp. 162–167, 2011.
53. Approaches to Investigate Sensor Displacement
Synthetically Modeled Sensor Displacement
54. Synthetically Modeled Sensor Displacement
• Sensor rotation → rotational noise (RN)
• Sensor translation → additive noise (AN)
• Examples:
M_RN =
⎡ c(θ)c(ψ)   −c(φ)s(ψ) + s(φ)s(θ)c(ψ)   s(φ)s(ψ) + c(φ)s(θ)c(ψ) ⎤
⎢ c(θ)s(ψ)    c(φ)c(ψ) + s(φ)s(θ)s(ψ)   −s(φ)c(ψ) + c(φ)s(θ)s(ψ) ⎥
⎣ −s(θ)       s(φ)c(θ)                    c(φ)c(θ)                 ⎦

(x_rot, y_rot, z_rot)ᵀ = M_RN · (x_raw, y_raw, z_raw)ᵀ

(x_tr, y_tr, z_tr)ᵀ = T_AN + (x_raw, y_raw, z_raw)ᵀ, with T_AN drawn from a normal distribution with mean μ_AN = 0 and variance σ²_AN
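The rotational-noise and additive-noise models can be simulated directly. One helper combining both is shown for convenience (an assumption of this sketch; set the angles or σ_AN to zero to apply each model on its own):

```python
import numpy as np

def rotation_matrix(phi, theta, psi):
    """M_RN: ZYX Euler rotation for roll phi, pitch theta, yaw psi (radians)."""
    c, s = np.cos, np.sin
    return np.array([
        [c(theta) * c(psi), -c(phi) * s(psi) + s(phi) * s(theta) * c(psi),
         s(phi) * s(psi) + c(phi) * s(theta) * c(psi)],
        [c(theta) * s(psi),  c(phi) * c(psi) + s(phi) * s(theta) * s(psi),
         -s(phi) * c(psi) + c(phi) * s(theta) * s(psi)],
        [-s(theta), s(phi) * c(theta), c(phi) * c(theta)],
    ])

def displace(xyz, angles_deg=(15.0, 0.0, 0.0), sigma_an=0.0, seed=None):
    """Apply rotational noise (x_rot = M_RN x_raw), then additive noise
    T_AN ~ N(mu_AN = 0, sigma_an^2), to a (n, 3) accelerometer signal."""
    rng = np.random.default_rng(seed)
    M = rotation_matrix(*np.radians(angles_deg))
    rotated = xyz @ M.T
    return rotated + rng.normal(0.0, sigma_an, np.shape(xyz))
```

For example, `displace(sig, angles_deg=(0, 0, 90))` reproduces a pure RN = 90° condition, and `displace(sig, angles_deg=(0, 0, 0), sigma_an=0.1)` a pure AN = 0.1 g condition.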
[Figure: example 3 s acceleration traces for walking and sitting — original signal vs. RN = 15°, RN = 90°, AN = 0.1 g and AN = 0.5 g]
Proposed in: H. Sagha, J. R. del Millán, and R. Chavarriaga. Detecting and rectifying anomalies in
Opportunistic sensor networks. 8th Int. Conf. on Networked Sensing Systems, pp. 162 – 167, 2011
55. Evaluation of the Robustness to Sensor Displacement (Synthetic)
• Benchmark dataset: MIT Activities of Daily Living Dataset*
– 9 activities
– 5 biaxial accelerometers (limbs and trunk)
– 20 subjects (17-48 years old)
– Out-of-lab
• Experimental setup: LP elliptic filter (Fc = 20 Hz) → 6-second sliding window → 10-feature set → KNN (as base classifier)
* http://architecture.mit.edu/house_n/data/Accelerometer/BaoIntille.htm
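The preprocessing front end of this setup can be sketched with SciPy. The sampling rate, filter order and ripple values below are assumptions not given on the slide:

```python
import numpy as np
from scipy.signal import ellip, filtfilt

FS = 50  # Hz, assumed sampling rate (not stated in the setup)

def lowpass(signal, fc=20.0, order=4, rp=0.01, rs=80.0):
    """Zero-phase low-pass elliptic filter with cutoff fc (Hz).
    order/rp/rs are illustrative choices, not the thesis's exact design."""
    b, a = ellip(order, rp, rs, fc / (FS / 2), btype="low")
    return filtfilt(b, a, signal, axis=0)

def sliding_windows(signal, seconds=6.0, overlap=0.0):
    """Split a (n_samples, n_channels) signal into fixed-length windows."""
    n = int(seconds * FS)
    step = max(1, int(n * (1.0 - overlap)))
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, step)]
```

Each 6-second window would then feed the 10-feature extractor and the KNN base classifier.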
56. • Performance drop under the effects of sensor rotation and translation:
Evaluation of the Robustness to Sensor Displacement (Synthetic)
[Figure: rotation and translation results for HWC (multi-sensor), FFMARC (multi-sensor) and SARC (single sensor)]
57. • Performance drop under the effects of sensor rotation and translation:
Evaluation of the Robustness to Sensor Displacement (Synthetic)
[Figure: per-model drops under rotation and translation, ranging from 1% to 30% depending on model and sensor placement]
58. • Performance drop under the effects of sensor rotation and translation:
Evaluation of the Robustness to Sensor Displacement (Synthetic)
[Figure: per-model drops under larger rotations and translations, ranging from 1% to 75% depending on model and sensor placement]
59. Approaches to Investigate Sensor Displacement
Realistic Sensor Displacement
60. Implementing Realistic Sensor Displacement
• No dataset for studying the effects of sensor displacement!
• Observe
– Variability introduced with respect to the ideal setup when the sensors are self-placed by the users
– Effects of large sensor displacements (extreme de-positioning)
• Scenarios: ideal-placement, self-placement, induced-displacement
NEW DATASET* (REALDISP)
*Freely available at: www.ugr.es/~oresti/datasets
61. REALDISP Dataset: Study Setup
• Cardio-fitness room
• 9 IMUs (9 DoF) → ACC, GYR, MAG
• Laptop → data storage and labeling*
• Camera → offline data validation
• 17 volunteers (22-37 years old)
*Annotation tool: http://crnt.sourceforge.net/CRN_Toolbox/Home.html
68. Conclusions
• Classic activity-aware systems assume a predefined sensor deployment that remains unchanged during runtime; neither assumption is lifelike
• Body-worn inertial sensors are subject to deployment changes (displacement) in real-world contexts, potentially leading to signal variations with respect to ideal patterns
• Activity recognition systems prove more sensitive to sensor rotations than to translations, especially for sensors located on body parts of reduced mobility
• Standard models (SARC, FFMARC) suffer a critical performance drop when the sensors are largely de-positioned or self-placed by the users
• The HWC significantly outperforms standard activity recognition models in tolerance (by up to 30%), effectively showing outstanding capabilities to assimilate the changes introduced by self-placement of the sensors and to moderately overcome largely de-positioned sensors
69. Supporting AR systems network
changes: instruction of
newcomer sensors
Objective: “Study the capacity of standard AR systems to support
unforeseen changes in the sensor network, as well as contribute with an
alternate approach to cope with these topological variations”
70. Problem Statement
SENSOR INFRASTRUCTURE CHANGES
Activity recognition system design: collect a training dataset → train and test the model → the AR system is "ready"
Do we need to collect a new dataset each time the sensor topology changes?
Is it possible to leverage the knowledge of a functional system to instruct a system to operate on a newcomer sensor?
72. Instruction of Newcomer Sensors
• Classic approach: collection of a new dataset for each possible scenario
Limitations:
- Predefined setup and deployments
- System designer involvement
- User/s involvement
• Transfer learning (teacher → learner): "Mechanism, ability or means to recognize and apply knowledge and skills learned in previous tasks or domains to novel tasks or domains"
73. Transfer Learning in AR: Related Work
• Transfer between wearable sensors
– Translation of locomotion recognition
capabilities (Calatroni11)
• Model parameters
• Labels
• Transfer between ambient sensors
– Translation among smart homes through
meta-featuring (van Kasteren10)
• Common meta-feature space
• Limitations
– Operation over long time scales
– Incomplete transfer
– Difficult transfer across modalities
A. Calatroni, D. Roggen, and G. Tröster, “Automatic transfer of activity recognition
capabilities between body-worn motion sensors: Training newcomers to recognize
locomotion,” in Proc. 8th Int Conf on Networked Sensing Systems, 2011.
T. van Kasteren, G. Englebienne, and B. Kröse, “Transferring knowledge of activity
recognition across sensor networks,” in Proc. 8th Int. Conf on Pervasive Computing, 2010.
74. Multimodal Transfer Methods
• System identification (signal level)
• Transfer methods (reasoning level)
Ψ_{A→B}(t): sensor domain A → sensor domain B
[Figure: example acceleration (G) and position (m) signals in the two sensor domains]
Transfer of activity models (features + labels, classification models)
Transfer of activity templates (patterns + labels)
75. Transfer of Activity Templates
System S (source domain, signals X_S(t)) and system T (target domain, signals X_T(t)), each with a signal level and a reasoning level
[Figure: source-domain position signals with activity templates L1, L2, L3; untrained target domain]
• Transfer of the recognition capabilities of an existing source system (S) that operates on activity templates (patterns) to an untrained target system (T) that lacks these capabilities
76. Transfer of Activity Templates
(1) Both systems coexist during a certain period of time
[Figure: source (position) and target (acceleration) signals recorded during the coexistence period]
77. Transfer of Activity Templates
(2) A mapping function between the source and target domains is discovered through system identification (MIMO model):

Ψ_{S→T}(t): X_S(t) → X̂_T(t) ≈ X_T(t)
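Under the simplifying assumption of a static linear MIMO map (the thesis's system identification may use a richer dynamic model), Ψ_{S→T} can be estimated from the time-aligned coexistence data by least squares:

```python
import numpy as np

def fit_mapping(X_S, X_T):
    """Learn a linear map W such that X_S @ W ~= X_T from coexistence data.
    X_S: (n, d_S) source samples; X_T: (n, d_T) target samples, time-aligned."""
    W, *_ = np.linalg.lstsq(X_S, X_T, rcond=None)
    return W

def translate(X_S, W):
    """Apply Psi_{S->T}: map source-domain signals into the target domain."""
    return X_S @ W
```

Once `W` is fitted during coexistence, `translate` can project the stored source templates into the target domain without any further labeled data.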
78. Transfer of Activity Templates
(3) The activity templates are translated from the source to the target domain
[Figure: source templates L1, L2, L3 passed through Ψ_{S→T}]
79. Transfer of Activity Templates
(3) The activity templates are translated from the source to the target domain (template L3 highlighted)
80. Transfer of Activity Templates
(3) The activity templates are translated from the source to the target domain
[Figure: position template (X, Y, Z) and its translated acceleration counterpart (X̂, Ŷ, Ẑ)]
81. Transfer of Activity Templates
(3) The activity templates are translated from the source to the target domain
[Figure: original (X, Y, Z) vs. translated (X̂, Ŷ, Ẑ) acceleration template]
82. Transfer of Activity Templates
(4) Once the templates have been translated, the target system is ready for activity detection
[Figure: translated acceleration templates for L1, L2, L3 in the target domain]
83. Transfer of Activity Templates
(4) Once the templates have been translated, the target system is ready for activity detection — instruction completed!
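Once translated, the templates can drive recognition in the target domain. A nearest-template rule with Euclidean distance is shown purely for illustration; the actual matching metric used by the thesis is not specified here:

```python
import numpy as np

def classify_by_template(window, templates, labels):
    """Assign to a signal window the label of the closest translated template."""
    dists = [np.linalg.norm(window - t) for t in templates]
    return labels[int(np.argmin(dists))]
```

A distance measure tolerant to temporal misalignment (e.g. dynamic time warping) would be a natural refinement of this sketch.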
84. 𝑋𝑆(𝑡) 𝑋 𝑇(𝑡)
System S (source domain) System T (target domain)
Signal
level
Reasoning
level
INTRODUCTION TECHNOLOGICAL ANOMALIES DEPLOYMENT VARIATIONS NETWORK CHANGES CONCLUSIONS
Transfer of Activity Models
• Transfer of the recognition capabilities of an existing source system (S) that
operates on activity models (features + classification model) to an untrained
target system (T) that lacks from these capabilities
85. [Diagram: System S (source domain, X_S(t)) and System T (target domain, X_T(t)); signal level / reasoning level. During coexistence (T), both systems record the same movements: position (m) and acceleration (G) vs. time (s)]
Transfer of Activity Models
(1) Both systems coexist during a certain period of time
86. [Diagram: System S (source domain, X_S(t)) and System T (target domain, X_T(t)); signal level / reasoning level]
Ψ_{T→S}(t): X_T(t) → X̂_S(t) ≈ X_S(t)
Transfer of Activity Models
(2) A mapping function between target and source domains is discovered
through system identification (MIMO model)
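The system-identification step can be sketched as follows — a minimal ordinary-least-squares stand-in for the MIMO model (a 3×3 mapping with a 10-tap delay line, as used later in the evaluation); function names are hypothetical:

```python
import numpy as np

def fit_mimo_mapping(X_T, X_S, n_taps=10):
    """Learn a tap-delayed linear MIMO mapping Psi_{T->S} from coexistence data.

    X_T: (N, 3) target-domain signals; X_S: (N, 3) source-domain signals,
    recorded simultaneously while both systems observe the same movement.
    Returns W of shape (3 * n_taps + 1, 3) solving X_S ~= Phi(X_T) @ W
    by least squares (a simple stand-in for system identification).
    """
    rows = []
    for t in range(n_taps - 1, X_T.shape[0]):
        window = X_T[t - n_taps + 1 : t + 1].ravel()   # current + past taps
        rows.append(np.concatenate(([1.0], window)))    # bias term first
    Phi = np.asarray(rows)
    W, *_ = np.linalg.lstsq(Phi, X_S[n_taps - 1 :], rcond=None)
    return W

def apply_mapping(X_T, W, n_taps=10):
    """Translate target signals into the source domain with the learned W."""
    rows = [np.concatenate(([1.0], X_T[t - n_taps + 1 : t + 1].ravel()))
            for t in range(n_taps - 1, X_T.shape[0])]
    return np.asarray(rows) @ W
```

With ~100 coexistence samples this regression can be solved in one shot, which is what makes the runtime, intervention-free operation plausible.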
87. [Diagram: System S (source domain, X_S(t)) and System T (target domain, X_T(t)); signal level / reasoning level]
Ψ_{T→S}(t): X_T(t) → X̂_S(t) ≈ X_S(t)
Transfer of Activity Models
(3) The source activity models are translated to the target domain so both use
the same activity models
88. [Diagram: System S (source domain, X_S(t)) and System T (target domain, X_T(t)); signal level / reasoning level]
Ψ_{T→S}(t): X_T(t) → X̂_S(t) ≈ X_S(t)
Transfer of Activity Models
(3) The source activity models are translated to the target domain so both use
the same activity models; these activity models also define the target activity
recognition system
89. [Diagram: System S (source domain, X_S(t)) and System T (target domain, X_T(t)); signal level / reasoning level]
Ψ_{T→S}(t): X_T(t) → X̂_S(t) ≈ X_S(t)
Transfer of Activity Models
(4) The target system continuously translates its signals into the source domain to operate on the transferred recognition system
[Figure: translated signals X, Y, Z vs. time (s)]
90. [Diagram: System S (source domain, X_S(t)) and System T (target domain, X_T(t)); signal level / reasoning level]
Ψ_{T→S}(t): X_T(t) → X̂_S(t) ≈ X_S(t)
Transfer of Activity Models
(4) The target system continuously translates its signals into the source domain to operate on the transferred recognition system; from then on, it is ready for activity detection
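A runtime sketch of this continuous translation, assuming the tap-delayed linear mapping has already been learned (a weight matrix `W` of shape (3·n_taps + 1, 3), matching the illustrative least-squares form; the class name is hypothetical):

```python
from collections import deque
import numpy as np

class OnlineTranslator:
    """Continuously translate target-domain samples into the source domain
    using a previously learned tap-delayed linear mapping (illustrative)."""

    def __init__(self, W, n_taps=10):
        self.W = W                       # (3 * n_taps + 1, 3) regression weights
        self.buf = deque(maxlen=n_taps)  # sliding window of recent samples

    def push(self, x_t):
        """Feed one (3,) target sample; returns the translated (3,) source
        estimate once the delay line is full, else None."""
        self.buf.append(np.asarray(x_t, dtype=float))
        if len(self.buf) < self.buf.maxlen:
            return None                  # still warming up the delay line
        phi = np.concatenate([[1.0], np.concatenate(self.buf)])
        return phi @ self.W              # x_S_hat(t), fed to the recognizer
```

Each translated sample is then consumed by the transferred recognition system exactly as if it came from the original source sensors.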
Instruction
completed!
[Figure: translated acceleration signals X̂, Ŷ, Ẑ vs. time (s)]
91. Evaluation of Multimodal Transfer
• Model validation
– Transfer between IMU and IMU (Identical Domain Transfer)
– Transfer between Kinect and IMU (Cross Domain Transfer)
[Figure: example position (m) and acceleration (G) signals vs. time (s) for sensor locations L1, L2, L3, illustrating the Transfer of Activity Templates and the Transfer of Activity Models]
92. Multimodal Kinect-IMU Dataset: Study Setup
*Freely available at: www.ugr.es/~oresti/datasets
93. Multimodal Kinect-IMU Dataset: Study Setup
MTx XSENS IMUs
- 3D ACC + (3D GYR, 3D MAG, 4D QUA)
- Sampling rate: 30 Hz
Applications
Xsens data logger: http://crnt.sourceforge.net/CRN_Toolbox/References.html
94. Multimodal Kinect-IMU Dataset: Study Setup
MICROSOFT KINECT
- RGB cam + IR cam + IR LED
- Depth map (0.5–6 m)
- 15-joint skeleton tracking
- 3D position
- Tracking range (1.2–3.5 m)
- Sampling rate: 30 Hz
Applications
Kinect data logger: http://code.google.com/p/qtkinectwrapper/
95. Multimodal Kinect-IMU Dataset: Scenarios
Geometric Gestures (HCI)
48 instances per gesture
Other scenarios were also collected as part of this dataset (more info at www.ugr.es/~oresti/datasets)
96. Multimodal Kinect-IMU Dataset: Scenarios
Geometric Gestures (HCI) Idle (Background)
48 instances per gesture | ~5 min of data
Other scenarios were also collected as part of this dataset (more info at www.ugr.es/~oresti/datasets)
97. Transfer between IMU and IMU
• Analyzed transfers
– Transfer of Activity Templates and Activity Models from:
• RLA (3D acceleration) to RUA (3D acceleration)
• RUA (3D acceleration) to RLA (3D acceleration)
• RUA (3D acceleration) to BACK (3D acceleration)
• BACK (3D acceleration) to RUA (3D acceleration)
• RLA (3D acceleration) to BACK (3D acceleration)
• BACK (3D acceleration) to RLA (3D acceleration)
98. Evaluation of Transfer between IMU and IMU
• Mapping:
– Model: MIMO 3×3 mapping with a 10-tap delay
– Types:
• Problem-domain mapping (PDM)
• Gesture-specific mapping (GSM)
• Unrelated-domain mapping (UDM)
– Learning: 100 samples (~3.3 s)
• Activity recognition model: triaxial acceleration (IMU) → no preprocessing (raw data) → instance-based segmentation → feature set (FS) = {max, min} → KNN (standard classifier)
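A minimal sketch of this recognition pipeline, assuming fixed-length gesture instances; the plain-NumPy k-nearest-neighbours below stands in for whatever standard KNN implementation was used (k and names are illustrative):

```python
import numpy as np

def extract_features(instance):
    """FS = {max, min}: per-axis maximum and minimum of one gesture instance.

    instance: (T, 3) raw triaxial acceleration; returns a 6-D feature vector.
    """
    return np.concatenate([instance.max(axis=0), instance.min(axis=0)])

def knn_predict(train_feats, train_labels, feat, k=3):
    """Plain k-nearest-neighbours majority vote (Euclidean distance)."""
    d = np.linalg.norm(train_feats - feat, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(np.asarray(train_labels)[nearest],
                               return_counts=True)
    return labels[np.argmax(counts)]
```

Because segmentation is instance-based and the features are just extrema, the whole model is cheap enough to retrain or re-evaluate after a transfer.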
99. Evaluation of Transfer between IMU and IMU
• Transfer of Activity Templates:
[Bar chart: recognition accuracy per transfer and mapping type]
BS=baseline source | BT=baseline target | PDM=problem-domain mapping | GSM=gesture-specific mapping | UDM=unrelated-domain mapping
LA=lower arm | UA=upper arm | B=back
100–102. Evaluation of Transfer between IMU and IMU
• Transfer of Activity Templates:
[Bar chart: accuracy for BS, BT, PDM, GSM and UDM on each LA/UA/B transfer; annotated accuracy differences range from <1% and <3% up to 10–60% depending on mapping type and sensor pair]
103. Evaluation of Transfer between IMU and IMU
• Transfer of Activity Models:
[Bar chart: recognition accuracy per transfer and mapping type]
BS=baseline source | BT=baseline target | PDM=problem-domain mapping | GSM=gesture-specific mapping | UDM=unrelated-domain mapping
LA=lower arm | UA=upper arm | B=back
105. Transfer between Kinect and IMU
• Analyzed transfers
– Transfer of Activity Templates (Kinect to IMU):
• HAND (3D position) → RLA (3D acceleration)
• HAND (3D position) → RUA (3D acceleration)
• HAND (3D position) → BACK (3D acceleration)
– Transfer of Activity Models (IMU to Kinect):
• RLA (3D acceleration) → HAND (3D position)
• RUA (3D acceleration) → HAND (3D position)
• BACK (3D acceleration) → HAND (3D position)
106. Evaluation of Transfer between Kinect and IMU
• Mapping:
– Model: MIMO 3×3 mapping with a 10-tap delay
– Types:
• Problem-domain mapping (PDM)
• Gesture-specific mapping (GSM)
• Unrelated-domain mapping (UDM)
– Learning: 100 samples (~3.3 s)
• Activity recognition model: triaxial acceleration (IMU) / triaxial position (Kinect) → no preprocessing (raw data) → instance-based segmentation → feature set (FS) = {max, min} → KNN (standard classifier)
107. Evaluation of Transfer between Kinect and IMU
• Transfer of Activity Templates (From Kinect to IMU)
• Transfer of Activity Models (From IMU to Kinect)
[Bar chart: recognition accuracy per transfer and mapping type]
BS=baseline source | BT=baseline target | PDM=problem-domain mapping | GSM=gesture-specific mapping | UDM=unrelated-domain mapping
RLA=right lower arm | RUA=right upper arm | BACK=back | KINECT=hand
108–110. Evaluation of Transfer between Kinect and IMU
• Transfer of Activity Templates (from Kinect to IMU)
• Transfer of Activity Models (from IMU to Kinect)
[Bar charts: accuracy for BS, BT, PDM, GSM and UDM; annotated accuracy differences range from <4%, <6% and <8% up to 30–60% depending on mapping type and sensor pair]
111. Evaluation of Transfer between Kinect and IMU
• Transfer of Activity Templates (From Kinect to IMU)
• Transfer of Activity Models (From IMU to Kinect)
[Charts: From Kinect to IMU (RLA) and From IMU (RLA) to Kinect; FS1=mean, FS2=max,min; 30 samples = 1 s]
112. Evaluation of Transfer between Kinect and IMU
• Transfer of Activity Templates (from Kinect to IMU)
• Transfer of Activity Models (from IMU to Kinect)
[Charts: From Kinect to IMU (RLA) and From IMU (RLA) to Kinect; FS1=mean, FS2=max,min; 30 samples = 1 s; annotated differences of 25% and 20%]
113. Conclusions
• Classical training procedures are impractical for instructing newcomer sensors in dynamically varying and evolvable activity recognition setups
• A novel multimodal transfer learning model is proposed to translate the
recognition capabilities of an existing system to a new untrained system, at
runtime and without expert or user intervention
• As little as a single gesture (≈3 seconds) of data is enough to learn a mapping model that captures the underlying relation between systems of identical or different modality
• The transfer between IMUs across close-by limbs achieves a recognition accuracy above 97% (about 2% below the baseline), and the transfer between Kinect and IMU above 95% (about 4% below the baseline), independently of the direction of the transfer
• Low-variance data unrelated to the activities of interest can also be used to learn a mapping, albeit requiring more data
114. Conclusions and future work
115. Contributions
• Identification of the requirements and challenges posed by AR systems in real-
world conditions
• Evaluation of the tolerance of standard AR systems to sensor technological
anomalies, particularly sensor failures and faults
• Definition and development of a novel model, the so-called hierarchical weighted classifier (HWC), to overcome the effects of sensor failures and faults, and evaluation of the robustness of the proposed model to these effects
• Evaluation of the tolerance of standard AR systems to sensor deployment
variations, particularly static and dynamic sensor displacements
• Evaluation of the robustness of the proposed HWC model to the effects of sensor
displacements
• Definition, development and validation of a novel multimodal transfer learning
method that operates at runtime, with low overhead and without user or system
designer intervention
116. Contributions
• Collection and curation of an innovative benchmark dataset to investigate the effects of sensor displacement, introducing the concepts of ideal-placement, self-placement and induced-displacement. This dataset includes a wide range of physical activities, sensor modalities and participants. Apart from investigating sensor displacement, the dataset lends itself to benchmarking activity recognition techniques in ideal conditions. The dataset is publicly available to the research community at http://www.ugr.es/~oresti/datasets
• Collection and curation of a novel multimodal dataset to investigate transfer learning between ambient sensing and wearable sensing systems. The dataset can also be used for gesture spotting and continuous activity recognition. It is publicly available to the research community at http://www.ugr.es/~oresti/datasets
117. Selected Publications
• International Journals (SCI-indexed)
– Banos, O., Toth M. A., Damas, M., Pomares, H., Rojas, I. Dealing with the effects of
sensor displacement in wearable activity recognition. Sensors, MDPI (2014) [Under
review]
– Banos, O., Damas, M., Guillen, A., Herrera, L.J., Pomares, H., Rojas, I. Multi-sensor
fusion based on asymmetric decision weighting for robust activity recognition.
Neural Processing Letters, Springer (2014) [Under review]
– Banos, O., Galvez, J. M., Damas, M., Pomares, H., Rojas, I. Window size impact in
activity recognition. Sensors, MDPI, vol. 14, no. 4, pp. 6474-6499 (2014)
– Banos, O., Damas, M., Pomares, H., Rojas, F., Delgado-Marquez, B., Valenzuela, O.
Human activity recognition based on a sensor weighting hierarchical classifier. Soft
Computing, Springer, vol. 17, pp. 333-343 (2013)
– Banos, O., Damas, M., Pomares, H., Rojas, I. On the Use of Sensor Fusion to Reduce
the Impact of Rotational and Additive Noise in Human Activity Recognition. Sensors,
MDPI, vol. 12, no. 6, pp. 8039-8054 (2012)
– Banos, O., Damas, M., Pomares, H., Prieto, A., Rojas, I.: Daily Living Activity
Recognition based on Statistical Feature Quality Group Selection. Expert Systems
with Applications, Elsevier, vol. 39, no. 9, pp. 8013-8021 (2012)
118. Selected Publications
• Book chapters
– Banos, O., Toth, M. A., Damas, M., Pomares, H., Rojas, I., Amft, O.: Evaluation of inertial sensor displacement effects in activity recognition systems. Science and Supercomputing in Europe (Information & Communication Technologies), HPC-Europa2 (2013) ISBN: 978-84-338-5400-1
• Conference papers
– Banos, O., Damas, M., Pomares, H., Rojas, I.: Handling displacement effects in on-body sensor-
based activity recognition. In: Proceedings of the 5th International Work-conference on
Ambient Assisted Living and Active Ageing (IWAAL 2013), San Jose, Costa Rica, December 2-6,
(2013) [BEST PAPER AWARD]
– Banos, O., Damas, M., Pomares, H., Rojas, I.: Activity recognition based on a multi-sensor
meta-classifier. In: Proceedings of the 2013 International Work Conference on Neural
Networks (IWANN 2013), Tenerife, June 12-14, (2013)
– Banos, O., Toth, M. A., Damas, M., Pomares, H., Rojas, I., Amft, O.: A benchmark dataset to
evaluate sensor displacement in activity recognition. In: Proceedings of the 14th International
Conference on Ubiquitous Computing (Ubicomp 2012), Pittsburgh, USA, September 5-8,
(2012)
119. Selected Publications
• Conference papers (cont.)
– Banos, O., Calatroni, A., Damas, M., Pomares, H., Rojas, I., Troester, G., Sagha, H., Millan, J. del
R., Chavarriaga, R., Roggen, D.: Kinect=IMU? Learning MIMO Signal Mappings to Automatically
Translate Activity Recognition Systems Across Sensor Modalities. In: Proceedings of the 16th
annual International Symposium on Wearable Computers (ISWC 2012), Newcastle, United
Kingdom, June 18-22 (2012)
– Banos, O., Damas, M., Pomares, H., Rojas, I.: Human multisource activity recognition for AAL
problems. In: Proceedings of the 5th International Symposium on Ubiquitous Computing and
Ambient Intelligence (UCAmI 2011), Riviera Maya, Mexico, December 5-9, (2011)
– Banos, O., Damas, M., Pomares, H., Rojas, I.: Recognition of Human Physical Activity based on
a novel Hierarchical Weighted Classification scheme. In: Proceedings of the 2011 International
Joint Conference on Neural Networks (IJCNN 2011), IEEE, San Jose, California, July 31-August
5, (2011)
– Banos, O., Pomares, H., Rojas, I.: Ambient Living Activity Recognition based on Feature-set
Ranking Using Intelligent Systems. In: Proceedings of the 2010 International Joint Conference
on Neural Networks (IJCNN 2010), IEEE, Barcelona, July 18-23, (2010)
120. Future Work
• Collection of new large standard
datasets
• Dynamic reconfiguration of the HWC
• Self-adaptive HWC
• Tolerance to other sensor
technological and topological
anomalies
• Multiple trainers and complex
modalities in transfer learning
• Integration in commercial systems
and end-user applications