{"id":17354,"date":"2020-01-02T07:43:53","date_gmt":"2020-01-02T06:43:53","guid":{"rendered":"https:\/\/www.inovex.de\/blog\/?p=17354"},"modified":"2022-12-02T09:11:10","modified_gmt":"2022-12-02T08:11:10","slug":"recognizing-assessing-recurrent-human-activity-with-wearable-sensors","status":"publish","type":"post","link":"https:\/\/www.inovex.de\/de\/blog\/recognizing-assessing-recurrent-human-activity-with-wearable-sensors\/","title":{"rendered":"Recognizing &#038; Assessing Recurrent Human Activity with Wearable Sensors"},"content":{"rendered":"<p>The visions created by the <em>Internet of Things<\/em>, which encompass the seamless embedding of the virtual world into daily human life have become reality by now. In that context, the ongoing miniaturization of wearables, the ubiquitous availability of capable and mobile computation devices, and the fast progress within the domain of machine learning accelerated the recognition and analysis of human activity on basis of motion sensor information. Typical use cases are, e.g. controlling devices with gestures, supporting and monitoring of patients in a medical context or the tracking and estimation of physical exercises.<\/p>\n<p>Related to that, the following article focuses on the analysis of human motion in sports in order to not only detect and identify different physical activities, but also to analyze them regarding their quality and correctness. Therefore, a distributed sensor system called SensX which is capable of capturing and analyzing human motion is presented. Moreover, an overview across a sequential process chain for analyzing multi-dimensional time-series with algorithms of supervised machine learning is provided. Afterwards, the application of this concept is evaluated by automatically recognizing and assessing the quality of conduction of different physical exercises. 
Finally, the performance of two different approaches for segmenting recurrent motion events is examined.<!--more--><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Tracking-Human-Motion\"><\/span>Tracking Human Motion<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>In general, methods to capture human motion can be divided into two separate categories, referred to here as indirect and direct capturing.<\/p>\n<p>Indirect capturing is realized by tracking test subjects with non-contact sensors such as cameras or depth sensors [1][2][3]. Advantages of such a setup are the holistic tracking of the human body (if more than one sensor is used), the possibility to determine exact distances and angles of extremities, and a potential for real-time computation due to the scalability of computational power in a stationary sensor setup.
In contrast, such a complex and stationary setup leads to a lack of mobility and is only applicable under laboratory conditions, which makes it inappropriate for a profound analysis of a variety of physical and sportive outdoor activities. This disadvantage becomes even more serious when multiple sensors must be added to avoid masking effects and to track all extremities of the human body.<\/p>\n<p>Direct capturing, in contrast, is conducted with wearable sensors, which are placed directly on the body of a test subject for motion tracking. Examples of this approach are <em>xsens<\/em> and <em>EnFlux<\/em> as well as <em>SensX<\/em>, the latter of which is presented in the scope of this article [4][5][6][10]. Advantages of such body-worn sensor systems are the freedom of movement during physical activity as well as the tracking of body motion without masking effects. A disadvantage can be the more coarse-grained motion capture, as every limb that is to be tracked needs to be monitored by an individual sensor module.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"A-Concept-for-Analyzing-and-Assessing-Human-Motion\"><\/span>A Concept for Analyzing and Assessing Human Motion<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>The basis for the following remarks is the <em>SensX<\/em> sensor system [6][10]. It implements the tracking of human motion as well as its analysis and assessment in real time within mobile scenarios by utilizing body-worn inertial sensors. To this end, it loosely leans on the ideas of an activity recognition chain as proposed by Bulling et al. [8].
The SensX concept consists of two main layers as depicted in Figure 1: The hardware layer and the software layer, which are wrapped around a process chain implementing mechanisms of supervised machine learning.<\/p>\n<figure id=\"attachment_17359\" aria-describedby=\"caption-attachment-17359\" style=\"width: 1024px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-17359 size-large\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/10\/sensx-logical-layer-process-chain_overview-1024x81.png\" alt=\"SensX architecture\" width=\"1024\" height=\"81\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/10\/sensx-logical-layer-process-chain_overview-1024x81.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/10\/sensx-logical-layer-process-chain_overview-300x24.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/10\/sensx-logical-layer-process-chain_overview-768x61.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/10\/sensx-logical-layer-process-chain_overview-1536x122.png 1536w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/10\/sensx-logical-layer-process-chain_overview-400x32.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/10\/sensx-logical-layer-process-chain_overview-360x29.png 360w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/10\/sensx-logical-layer-process-chain_overview.png 1637w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption id=\"caption-attachment-17359\" class=\"wp-caption-text\">Figure 1: Overview of the proposed <em>SensX<\/em> architecture for tracking and analyzing human motion, encompassing the hardware and the software layers.<\/figcaption><\/figure>\n<p>The hardware layer is responsible for organizing the <em>1) Tracking<\/em> of raw sensor information and consists of four external sensor units and one central computation unit.
The software layer encompasses all logical steps to create knowledge from raw sensor data, namely the <em>2) Preprocessing<\/em>, the <em>3) Segmentation<\/em>, the <em>4) Feature-Engineering<\/em>, and finally the <em>5) Classification<\/em>. Figure 2 depicts the underlying steps of the implemented activity recognition chain with all technical details, which will be referenced in the following sections.<\/p>\n<figure id=\"attachment_17822\" aria-describedby=\"caption-attachment-17822\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-17822 size-large\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/12\/acr_technical-1024x303.png\" alt=\"Technical overview of the individual steps of the proposed activity recognition and assessment chain.\" width=\"1024\" height=\"303\" \/><figcaption id=\"caption-attachment-17822\" class=\"wp-caption-text\">Figure 2: Technical overview of the individual steps of the proposed activity recognition and assessment chain.<\/figcaption><\/figure>\n<h3><span class=\"ez-toc-section\" id=\"1-Tracking\"><\/span>(1) Tracking<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Within the <em>SensX<\/em> architecture, the tracking of human motion is completely organized in the hardware layer.
To this end, four external, wearable <em>CPRO<\/em> sensor units by <em>mbientlab<\/em> with an actual sampling rate of <code><em>40Hz<\/em><\/code> and one Android smartphone, functioning as a central computation and sensing unit with a sensor sampling rate of <code><em>100Hz<\/em><\/code>, are used [7].<\/p>\n<figure id=\"attachment_17681\" aria-describedby=\"caption-attachment-17681\" style=\"width: 300px\" class=\"wp-caption alignright\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-17681\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/chadly-sensx-973x1024.png\" alt=\"SensX sensor system worn by a study participant.\" width=\"300\" height=\"316\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/chadly-sensx-973x1024.png 973w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/chadly-sensx-285x300.png 285w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/chadly-sensx-768x808.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/chadly-sensx-400x421.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/chadly-sensx-360x379.png 360w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/chadly-sensx.png 1290w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><figcaption id=\"caption-attachment-17681\" class=\"wp-caption-text\">Figure 3: The SensX sensor system worn by a study participant.<\/figcaption><\/figure>\n<p>The tracked sensor data encompasses acceleration and rotation information for the dimensions <code><em>X<\/em><\/code>, <code><em>Y<\/em><\/code>, and <code><em>Z<\/em><\/code>.\u00a0This data represents the incoming time-series depicted in Figure 2, <em>(1) Tracking<\/em>, and consists of <code><em>n=30<\/em><\/code> individual signals <code>S<\/code> used as input for the next step, <em>(2) Preprocessing<\/em>, within the proposed process chain.
The potential sampling rate of the external sensor units is much higher, but since they all communicate with the central computation unit through a single <em>Bluetooth<\/em> channel, the available bandwidth for data transfer must be split between them. This leads to a smaller actual data rate achieved by the external sensor units.<\/p>\n<p>Figure 3 shows the <em>SensX<\/em> sensor system worn by a study participant: The four external units are applied to the body&#8217;s extremities, while the central computation unit is fastened on the chest with a flexible harness.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"2-Preprocessing\"><\/span>(2) Preprocessing<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<figure id=\"attachment_17707\" aria-describedby=\"caption-attachment-17707\" style=\"width: 350px\" class=\"wp-caption alignright\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-17707\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/preprocessing-300x225.png\" alt=\"Figure 4: Preprocessing of an acceleration signal with a Butterworth low pass filter.\" width=\"350\" height=\"262\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/preprocessing-300x225.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/preprocessing-1024x768.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/preprocessing-768x576.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/preprocessing-1536x1152.png 1536w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/preprocessing-400x300.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/preprocessing-360x270.png 360w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/preprocessing.png 1890w\" sizes=\"auto, (max-width: 350px) 100vw, 350px\" \/><figcaption id=\"caption-attachment-17707\" class=\"wp-caption-text\">Figure 4: Preprocessing of an acceleration signal with a Butterworth low pass
filter.<\/figcaption><\/figure>\n<p>Commonly, there are various well-known issues with time-series acquired by sensors, which complicate their analysis significantly. Examples are the occurrence of ambient noise, environmental influences such as a dependence on temperature or moisture, or even the breakdown of sensor devices. In order to address such noise within signals or short-time measurement failures, as well as to simplify the shape of a signal, filters can be applied.<\/p>\n<p>In step <em>(2) Preprocessing<\/em> of the process chain implemented by <em>SensX<\/em>, which is also depicted in Figure 2, the most important task is the filtering of the <code><em>n=30<\/em><\/code> signals <code><em>S<\/em><\/code> with a <em>Butterworth<\/em> low pass filter [9]. Its transfer function allows low frequencies to pass the filter, while high frequencies, such as sensor noise, are reduced. Hence, depending on the order and the intensity of the filter, it is possible to smooth the input signals. Figure 4 visualizes the smoothing of an acceleration signal by applying a <em>Butterworth<\/em> low pass filter to it. The advantage of this procedure is the simplification of the signals and the reduction of noise within them. As a disadvantage, however, information that is potentially valuable for further analysis may get lost during the filtering process. This emphasizes the need to choose the configuration parameters for the filter intensity carefully.<\/p>\n<p>According to Figure 2, the outcome of step <em>(2) Preprocessing<\/em> is the set of <em>n<\/em> smoothed signals <code>S'<\/code>, which is handed over as input to the next step, <em>(3) Segmentation<\/em>.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"3-Segmentation\"><\/span>(3) Segmentation<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>In general, there are several core concepts for segmenting recurrent motion events from continuous time-series.
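<\/p>
<p>As a brief illustration, the smoothing in step <em>(2) Preprocessing<\/em> can be sketched in a few lines of Python with <em>scipy<\/em>. Note that the cutoff frequency and filter order below are illustrative assumptions, not the exact <em>SensX<\/em> configuration:<\/p>

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth(signal, fs=100.0, cutoff=5.0, order=4):
    """Zero-phase Butterworth low pass for one sensor signal.

    fs is the sampling rate in Hz (100 Hz for the central unit);
    cutoff and order are illustrative values, not the SensX settings.
    """
    b, a = butter(order, cutoff, btype="low", fs=fs)
    # filtfilt runs the filter forward and backward, so the smoothed
    # signal is not shifted in time relative to the raw one
    return filtfilt(b, a, signal)

# Toy input: a slow motion component plus high-frequency sensor noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 200)
clean = np.sin(2.0 * np.pi * t)
raw = clean + 0.3 * rng.standard_normal(t.size)
smoothed = smooth(raw)
```

<p>As discussed above, a higher order or a lower cutoff smooths more aggressively, but also removes more potentially valuable signal content.<\/p>
<p>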
Algorithms using static window sizes are often triggered by a signal-based threshold and offer a comparably simple solution for segmenting individual motion events within an activity sequence. In the current scenario, however, the subjects of examination are human motion events that are unsteady and often change in appearance. This especially applies to the execution of physical exercises: their duration and the exactness of their execution always rely on the skill of an athlete as well as on their current level of strength and endurance.<\/p>\n<p>This leads to the assumption that a static segmentation algorithm is not sufficient to capture the actual segment borders of motion events of varying lengths, for a static window size inevitably leads to overlapping or cropping. In the following, two different segmentation approaches are presented to address these issues: one using static and another one using adaptive window sizes. Both of them are evaluated and compared below.<\/p>\n<p>But before applying one of the segmentation algorithms to the incoming <code><em>n<\/em><\/code> signals <code><em>S'<\/em><\/code>, the most meaningful signal <code><em>S<sub>MMS<\/sub><\/em><\/code> is identified by calculating the standard deviation <code><i>\u03c3<\/i><\/code> for each of them. One reason for this is that segmenting each of the <code><em>n<\/em><\/code> signals sequentially creates much more computational load than segmenting only <code><em>S<sub>MMS<\/sub><\/em><\/code> to identify the borders of an encompassed motion event. After identifying a segment&#8217;s start time <code><em>t<sub>start<\/sub><\/em><\/code> and end time <code><em>t<sub>end<\/sub><\/em><\/code> in <code><em>S<sub>MMS<\/sub><\/em><\/code>, all corresponding segments of a motion event can be cut out of the remaining <code><em>n<\/em><\/code> signals according to these timestamps.
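<\/p>
<p>A minimal sketch of this selection step in Python, assuming (for illustration only) that the signals arrive as a NumPy array with one row per signal and a shared time base:<\/p>

```python
import numpy as np

def select_mms(signals):
    """Return the row index of the most meaningful signal S_MMS,
    i.e. the signal with the highest standard deviation sigma."""
    return int(np.argmax(np.std(signals, axis=1)))

def cut_event(signals, t_start, t_end):
    """Cut the samples in [t_start, t_end) out of all n signals,
    using the borders found in S_MMS."""
    return signals[:, t_start:t_end]

# Toy activity sequence: 3 signals, only the second one carries real motion
rng = np.random.default_rng(0)
signals = np.vstack([
    0.1 * rng.standard_normal(100),              # nearly idle sensor axis
    np.sin(np.linspace(0.0, 6.0 * np.pi, 100)),  # strong periodic motion
    0.2 * rng.standard_normal(100),
])
mms = select_mms(signals)            # index of the most meaningful signal
event = cut_event(signals, 20, 60)   # one motion event, shape (3, 40)
```

<p>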
The other reason is that not all captured signals describing an activity sequence are suitable for segmentation, e.g., due to the absence of significant motion of individual extremities when carrying out certain activities. In the case of <em>SensX<\/em> this means that a segmented motion event consists of <code><em>n=30<\/em><\/code> signal segments of length <code><em>\u0394t = t<sub>end<\/sub> &#8211; t<sub>start<\/sub><\/em><\/code>.<\/p>\n<h4><span class=\"ez-toc-section\" id=\"Peak-based-Segmentation-with-Static-Window-Sizes\"><\/span>Peak-based Segmentation with Static Window Sizes<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<figure id=\"attachment_17759\" aria-describedby=\"caption-attachment-17759\" style=\"width: 380px\" class=\"wp-caption alignright\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-17759\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-1-2-300x254.png\" alt=\"Figure 5: Peak-based segmentation of an acceleration signal with a static window size.\" width=\"380\" height=\"322\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-1-2-300x254.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-1-2-1024x867.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-1-2-768x650.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-1-2-1536x1300.png 1536w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-1-2-400x339.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-1-2-360x305.png 360w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-1-2.png 1890w\" sizes=\"auto, (max-width: 380px) 100vw, 380px\" \/><figcaption id=\"caption-attachment-17759\" class=\"wp-caption-text\">Figure 5: Peak-based segmentation of an acceleration signal with a static window
size.<\/figcaption><\/figure>\n<p>Figure 5 illustrates the implementation of a peak-based segmentation algorithm based on a sliding window with a static window size, applied to an acceleration signal captured with the <em>SensX<\/em> system. Here, the incoming signal is scanned sequentially for a local peak, which is marked with a black cross. As soon as one is found, the algorithm proceeds to the next zero crossing to identify the seed point of segmentation, which is marked with a blue cross. From this seed point on, the event segment is cut out of the activity sequence by using a static window size <code><em>\u0394t<\/em><\/code>, which is distributed back and forth along the original signal. The distribution ratio between the two directions depends on the individual shape of the underlying event class. After cutting a segment out of the continuous sequence, the algorithm proceeds with scanning for the peaks of the following events.<\/p>\n<h4><span class=\"ez-toc-section\" id=\"Segmentation-with-Extrema-Fingerprints-and-an-Adaptive-Window-Size\"><\/span>Segmentation with Extrema Fingerprints and an Adaptive Window Size<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p>The individual steps of a second approach, which segments with adaptive window sizes, are depicted in Figure 6. First, the <code><em>S<sub>MMS<\/sub><\/em><\/code> is selected in <em>(1)<\/em> and heavily filtered with a <em>Butterworth<\/em> low pass in <em>(2)<\/em>. In the process, the signal is transformed significantly and a lot of potentially valuable information is lost. Here, the strength of the filter needs to be chosen carefully, since it also modifies the position of the signal&#8217;s zero crossings.
But despite these side-effects, the signal&#8217;s shape is simplified greatly and allows for the identification of so-called extrema fingerprints.<\/p>\n<figure id=\"attachment_17715\" aria-describedby=\"caption-attachment-17715\" style=\"width: 1024px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-17715 size-large\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-2-1024x249.png\" alt=\"Figure 6: Adaptive signal segmentation by using extrema fingerprints.\" width=\"1024\" height=\"249\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-2-1024x249.png 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-2-300x73.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-2-768x187.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-2-1536x374.png 1536w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-2-400x97.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-2-360x88.png 360w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/segmentation-2.png 1842w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption id=\"caption-attachment-17715\" class=\"wp-caption-text\">Figure 6: Concept 2, implementing adaptive signal segmentation by using extrema fingerprints.<\/figcaption><\/figure>\n<p>These fingerprints consist of a variable number of local extrema, defining exactly one type of motion event within an activity sequence. Figure 6 <em>(3)<\/em> depicts a basic fingerprint consisting of only one local minimum followed by a local maximum. Now, the segmentation algorithm again proceeds sequentially through the <code><em>S<sub>MMS<\/sub><\/em><\/code> with an initially fixed window size and searches for the given fingerprint.
The size is determined with the help of the auto-correlation of the <code><em>S<sub>MMS<\/sub><\/em><\/code>, providing information about the initial duration of the encompassed signal frequencies. As soon as a corresponding pattern is found, the algorithm rewinds to the preceding zero crossing of the first extremum, which marks the segment&#8217;s starting time <code><em>t<sub>start<\/sub><\/em><\/code>. At the last extremum it fast-forwards to the next zero crossing, which marks the segment&#8217;s end time <code><em>t<sub>end<\/sub><\/em><\/code>. These timestamps are now used to cut the segments of the corresponding motion event out of all <code><em>n=30<\/em><\/code> input signals <em>(4)<\/em>; the segmentation then proceeds from the last end time <code><em>t<sub>end<\/sub><\/em><\/code>.<\/p>\n<p>In the offline case (e.g., for model training), the outcome of <em>(3) Segmentation<\/em> is a set of motion events <code><em>E={e<sub>1<\/sub>,...,e<sub>i<\/sub>}<\/em><\/code>; in the targeted online scenario (real-time analysis of physical exercises) it is a single motion event <code><em>e<sub>i<\/sub><\/em><\/code>, which is handed over to the next step, <em>(4) Feature-Engineering<\/em>. Each element <code><em>e<sub>i<\/sub><\/em><\/code> consists of <code><em>n=30<\/em><\/code> signal segments <code><em>S'<sub>z<\/sub><\/em><\/code>. All elements <code><em>e<sub>i<\/sub><\/em><\/code> may vary in their individual length, while the temporal length of all <code><em>S'<sub>z<\/sub><\/em><\/code> within one element <code><em>e<sub>i<\/sub><\/em><\/code> is equal.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"4-Feature-Engineering\"><\/span>(4) Feature-Engineering<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>In the following, the feature-engineering process for the current use case, the analysis of physical exercises relying on the SensX sensor system, is described.
There are many other automated or manual concepts to address this issue, as well as the possibility to skip this step entirely, given the capabilities of supervised approaches based on Deep Learning. But because the current use case demands ad-hoc training of models for new motion events as well as classification and assessment in real time, a resource-efficient and robust classification architecture is needed. To account for these preconditions, an analysis approach based on a compact, hand-crafted feature set and classification algorithms for supervised learning is proposed.<\/p>\n<h4><span class=\"ez-toc-section\" id=\"Inspecting-and-Labeling-the-Dataset\"><\/span>Inspecting and Labeling the Dataset<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p>Prior to the actual feature engineering, the dataset used for model training and testing is inspected and labeled to get an understanding of the underlying data. The dataset was compiled during a comprehensive study encompassing <em>26<\/em> athletes conducting <em>6<\/em> different body weight exercises, namely <em>Crunches (CR), Lunges (LU), Jumping Jacks (JJ), Bicycle Crunches (BC), Knee Bends (KB)<\/em>, and <em>Mountain Climbers (MC)<\/em>. Each exercise was executed by each athlete for <em>3<\/em> sets of <em>20<\/em> repetitions each. All athletes were instructed beforehand with coaching videos showing the exact execution of the exercises; moreover, they were recorded on video during the workout. This way, <em>7,534<\/em> individual exercise events were tracked.<\/p>\n<p>In order to label the dataset not only in terms of the conducted exercises, but also concerning the quality class with which an exercise was executed, the following approach was used. Each event <em>e<sub>i<\/sub><\/em>, symbolizing one individual repetition of an exercise class, was rated by two experts concerning its quality of conduction on the basis of the tracked video information.
The discretization of subjective perception is a challenging task and is addressed in this work as follows:<\/p>\n<p style=\"text-align: left;\">\\(L_{i} = p_{s} + \\sum_{n=1}^i p_{a_{n}}, \\; \\textrm{when} \\; L_{i} &gt; 5: L_{i} = 5 \\)<\/p>\n<p>Here, each repetition is initially rated with a quality score of <code><em>p<sub>s<\/sub>=1<\/em><\/code>, which corresponds to the best quality rating on a scale from <em>1<\/em> to <em>5<\/em>. Subsequently, penalty scores <code><em>p<sub>a<\/sub><\/em><\/code> are added for each fault in conduction according to the fault&#8217;s severity: <em>0.25<\/em> for slight, <em>0.5<\/em> for medium, and <em>1<\/em> for severe faults. These penalty scores are summed up and define the final quality label <code><em>L<sub>i<\/sub><\/em><\/code> for each individual <code><em>e<sub>i<\/sub><\/em><\/code>. If a repetition&#8217;s score is greater than <em>5<\/em>, it is set to <em>5<\/em>, which represents the worst conduction quality.<\/p>\n<h4><span class=\"ez-toc-section\" id=\"Building-compact-feature-vectors\"><\/span>Building compact feature vectors<span class=\"ez-toc-section-end\"><\/span><\/h4>\n<p>The basis for the following construction of compact feature vectors are pairs of an individual motion event <code><em>e<sub>i<\/sub><\/em><\/code> and the corresponding label <code><em>L<sub>i<\/sub><\/em><\/code>, where the label is defined by the quality score as determined in the previous section.
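<\/p>
<p>The labeling scheme above is straightforward to express in code. The following sketch assumes the per-fault penalties are already available as a list of scores (<em>0.25<\/em>, <em>0.5<\/em>, or <em>1<\/em>):<\/p>

```python
def quality_label(penalties, p_s=1.0, worst=5.0):
    """Quality label L_i for one repetition e_i.

    p_s is the initial (best) score of 1; penalties holds one entry
    per observed fault (0.25 slight, 0.5 medium, 1.0 severe).
    The sum is capped at 5, the worst quality class.
    """
    return min(p_s + sum(penalties), worst)

flawless = quality_label([])            # 1.0, the best rating
minor = quality_label([0.25, 0.5])      # 1.75
sloppy = quality_label([1.0] * 10)      # capped at 5.0
```

<p>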
Concerning the construction, Figure 7 provides insights into how an expressive feature set for describing motion events is chosen.<\/p>\n<figure id=\"attachment_17792\" aria-describedby=\"caption-attachment-17792\" style=\"width: 1024px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-17792 size-large\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/std-bicycle-1-1024x664.png\" alt=\"Standard deviations of the input signals of 100 randomly chosen motion events of the quality classes 1 and 4, respectively, for the exercise Bicycle Crunch.\" width=\"1024\" height=\"664\" \/><figcaption id=\"caption-attachment-17792\" class=\"wp-caption-text\">Figure 7: Standard deviations of the input signals of <em>100<\/em> randomly chosen motion events of the quality classes <em>1<\/em> and <em>4<\/em>, respectively (<em>Bicycle<\/em> <em>Crunch<\/em>).<\/figcaption><\/figure>\n<p>Depicted are the standard deviations of all <code><em>n=30<\/em><\/code> input signals for <em>100<\/em> randomly chosen motion events of the exercise <em>Bicycle Crunch<\/em> with the quality classes <em>1<\/em> and <em>4<\/em>, respectively. The red boxes mark the signals with the highest standard deviation: the acceleration of the athletes&#8217; arms in the <code><em>Z<\/em><\/code> dimension and their rotation in the <code><em>X<\/em><\/code> dimension. These visualize one of the characteristics of <em>Bicycle Crunches<\/em>, where an athlete lies on the back and brings their arms to their knees alternately. The plot shows that executions labeled with quality class <em>1<\/em> and with class <em>4<\/em> exhibit a nearly comparable acceleration of the arms in the <code><em>Z<\/em><\/code> direction; the acceleration for class <em>1<\/em> is only slightly higher.
In contrast, the rotation in the <em><code>X<\/code>&#8211;<\/em>direction is much higher for class <em>1<\/em> events compared to those of class <em>4<\/em>. Similar observations can be made for the other signals. These observations lead to the assumption that the standard deviations of the <code><em>n<\/em><\/code> signals of an event <code><em>e<sub>i<\/sub><\/em><\/code> alone are sufficient to describe its quality and to distinguish it from events of other activity classes.<\/p>\n<figure id=\"attachment_17803\" aria-describedby=\"caption-attachment-17803\" style=\"width: 555px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-17803 \" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/rec-feature-vector-creation-1-300x80.jpg\" alt=\"Figure 8: Creation of compact and labeled feature vectors Xi with 31 digits.\" width=\"555\" height=\"148\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/rec-feature-vector-creation-1-300x80.jpg 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/rec-feature-vector-creation-1-1024x272.jpg 1024w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/rec-feature-vector-creation-1-768x204.jpg 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/rec-feature-vector-creation-1-1536x409.jpg 1536w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/rec-feature-vector-creation-1-1920x511.jpg 1920w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/rec-feature-vector-creation-1-400x106.jpg 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/rec-feature-vector-creation-1-360x96.jpg 360w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/11\/rec-feature-vector-creation-1.jpg 2000w\" sizes=\"auto, (max-width: 555px) 100vw, 555px\" \/><figcaption id=\"caption-attachment-17803\" class=\"wp-caption-text\">Figure 8: Compact and labeled feature vectors <em>X<sub>i<\/sub><\/em> with 31 digits.<\/figcaption><\/figure>\n<p>Based 
on this assumption, Figure 8 shows the shape of the final <em>31<\/em>-digit feature vector <code><em>X<sub>i<\/sub><\/em><\/code>. The first <em>30<\/em> digits contain the standard deviation <span style=\"font-size: 14pt;\"><code><i>\u03c3<\/i><\/code><\/span><span style=\"font-size: 10pt;\"><em><sub><code>S'z<\/code>\u00a0 <\/sub><\/em><\/span>of each of the <code><em>n=30<\/em><\/code> signal segments <code><em>S'<sub>z<\/sub><\/em><\/code> of an event <code><em>e<sub>i<\/sub><\/em><\/code>. Additionally, the duration <code><em><span class=\"st\">\u0394<\/span>t<\/em>=<em>t<\/em><sub><em>end<\/em><\/sub>-<em>t<sub>start<\/sub><\/em><\/code> of\u00a0<code><em>e<sub>i<\/sub><\/em><\/code> is added. Subsequently, by assigning the corresponding label <code><em>L<sub>i<\/sub><\/em><\/code> to <code><em>X<sub>i<\/sub><\/em><\/code>, the final event instance <code><em>I<sub>i<\/sub>={X<sub>i<\/sub>|L<sub>i<\/sub>}<\/em><\/code> is created for each <code><em>e<sub>i<\/sub><\/em><\/code>. These instances <code><em>I<sub>i<\/sub><\/em><\/code> now serve as the input for the model training as well as for the next step, <em>(5) Classification<\/em>.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"5-Classification\"><\/span>(5) Classification<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Prior to performing classification, a supervised learning classifier must be trained. Different classifiers and their individual performances are presented in the next section during evaluation. Within this section, we assume that a pre-trained classifier already exists. Input for the classification is a feature vector <code><em>X<sub>i<\/sub><\/em><\/code> as described in <em>(4) Feature-Engineering<\/em>, which is now processed as depicted in Figure 3, <em>(5) Classification<\/em>. 
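Assuming each event arrives as an `n=30` × samples array of signal segments, the feature construction described above can be sketched as follows (function and variable names are illustrative, not taken from the original implementation):

```python
import numpy as np

def build_feature_vector(segments, t_start, t_end):
    """Compact 31-digit feature vector X_i for one motion event e_i:
    the standard deviation of each of the n=30 signal segments S'_z,
    followed by the event duration dt = t_end - t_start."""
    stds = segments.std(axis=1)              # one sigma per signal row, shape (30,)
    return np.append(stds, t_end - t_start)  # final shape (31,)
```

Pairing the resulting vector with its label `L_i` then yields the instance `I_i = {X_i | L_i}` used for training and classification.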
The pre-trained model holds a set of possible event labels <code><em>L={l<sub>1<\/sub>,...,l<sub>k<\/sub>}<\/em>,<\/code> which, in the following evaluation, correspond either to activity classes or to quality classes, depending on the targeted use case. For each incoming feature vector <code><em>X<sub>i<\/sub><\/em><\/code>, which is assigned to a corresponding event <code><em>e<sub>i<\/sub><\/em><\/code>, a set of probabilities <code><em>P<sub>ei<\/sub>={p<sub>1<\/sub>,...,p<sub>k<\/sub>}<\/em><\/code> is calculated, where each probability describes the chance that <code><em>X<sub>i<\/sub><\/em><\/code> belongs to the corresponding label <code><em>l<\/em><\/code>. The parameter <code><em>\u03b8<\/em><\/code> depicted in Figure 3 represents additional hyper-parameters, which are required by some types of classifiers and can influence the final classification output. Subsequently, the final label <code><em>L<sub>i<\/sub><\/em><\/code> is derived from the maximum <code><em>p(l|X<sub>i<\/sub>,\u03b8)<\/em><\/code> within <code><em>P<sub>ei<\/sub><\/em><\/code>, and thereby defines the class of the corresponding event <code><em>e<sub>i<\/sub><\/em><\/code>.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Evaluation\"><\/span>Evaluation<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>In this section, the capabilities of <em>SensX<\/em> built on the concepts described above are evaluated in terms of classification performance and potential real-time analysis. 
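The label-selection step described for (5) Classification reduces to an argmax over P_ei; a minimal sketch (the example label names are illustrative):

```python
import numpy as np

def classify_event(probabilities, labels):
    """Derive the final label L_i as the label l whose probability
    p(l | X_i, theta) is maximal within P_ei."""
    return labels[int(np.argmax(probabilities))]
```

For instance, `classify_event([0.1, 0.7, 0.2], ["CR", "LU", "JJ"])` returns `"LU"`.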
Moreover, the efficiency of adaptive segmentation in comparison to a static segmentation approach is explored.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Qualitative-Assessment-of-Human-Activities\"><\/span>Qualitative Assessment of Human Activities<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Within the classification for qualitative assessment, four different supervised machine learning classifiers were trained and evaluated: a <em>Random Forest (RF)<\/em> and a <em>C4.5<\/em> decision tree classifier, a <em>Support Vector Machine (SVM)<\/em>, and a <em>Naive Bayes (NB)<\/em> classifier. Table 1 shows the results of the qualitative assessment, using the discretized quality classes described above for labeling. The best results for all exercises are achieved by the <em>RF<\/em> classifier with a mean accuracy of <em>89.7%<\/em> in a <em>10<\/em>-fold cross-validation. Of all <em>7,534<\/em> recorded events, <em>7,413<\/em> exercise events were used for evaluation, while <em>121<\/em> could not be extracted during the adaptive segmentation process.<\/p>\n<span id=\"tablepress-38-description\" class=\"tablepress-table-description tablepress-table-description-id-38\">Table 1: Results for the qualitative assessment of physical exercises sorted by classifier<\/span>\n\n<table id=\"tablepress-38\" class=\"tablepress tablepress-id-38\" aria-describedby=\"tablepress-38-description\">\n<thead>\n<tr class=\"row-1\">\n\t<th class=\"column-1\">Classifier<\/th><th class=\"column-2\">CR (%)<\/th><th class=\"column-3\">LU (%)<\/th><th class=\"column-4\">JJ (%)<\/th><th class=\"column-5\">BC (%)<\/th><th class=\"column-6\">KB (%)<\/th><th class=\"column-7\">MC (%)<\/th><th class=\"column-8\">\u00d8 (%)<\/th>\n<\/tr>\n<\/thead>\n<tbody class=\"row-striping row-hover\">\n<tr class=\"row-2\">\n\t<td class=\"column-1\">RF<\/td><td class=\"column-2\">88.00<\/td><td class=\"column-3\">90.00<\/td><td class=\"column-4\">92.10<\/td><td 
class=\"column-5\">92.10<\/td><td class=\"column-6\">93.40<\/td><td class=\"column-7\">82.50<\/td><td class=\"column-8\">89.70<\/td>\n<\/tr>\n<tr class=\"row-3\">\n\t<td class=\"column-1\">C4.5<\/td><td class=\"column-2\">79.10<\/td><td class=\"column-3\">80.50<\/td><td class=\"column-4\">83.60<\/td><td class=\"column-5\">82.10<\/td><td class=\"column-6\">84.60<\/td><td class=\"column-7\">67.90<\/td><td class=\"column-8\">79.60<\/td>\n<\/tr>\n<tr class=\"row-4\">\n\t<td class=\"column-1\">SVM<\/td><td class=\"column-2\">73.20<\/td><td class=\"column-3\">80.80<\/td><td class=\"column-4\">85.00<\/td><td class=\"column-5\">85.70<\/td><td class=\"column-6\">80.20<\/td><td class=\"column-7\">60.20<\/td><td class=\"column-8\">77.50<\/td>\n<\/tr>\n<tr class=\"row-5\">\n\t<td class=\"column-1\">NB<\/td><td class=\"column-2\">54.30<\/td><td class=\"column-3\">70.30<\/td><td class=\"column-4\">72.50<\/td><td class=\"column-5\">76.30<\/td><td class=\"column-6\">58.50<\/td><td class=\"column-7\">54.60<\/td><td class=\"column-8\">64.40<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<!-- #tablepress-38 from cache -->\n<p>The <em>C4.5<\/em> tree classifier and the <em>SVM<\/em> show results mostly comparable to each other, while the <em>NB<\/em> performs worst. In return, the <em>NB<\/em> provides by far the best runtime due to its simplicity. These results show that even a compact feature vector containing only the standard deviations of the acceleration and motion information tracked from a subject&#8217;s extremities and chest is sufficient for fine-grained quality assessment.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Static-vs-Adaptive-Segmentation\"><\/span>Static vs. Adaptive Segmentation<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>Besides the classification capabilities within the qualitative analysis, the accuracy and runtime of static and adaptive segmentation, as well as their impact on classification performance, were evaluated. 
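A 10-fold cross-validation comparison like the one behind Table 1 can be sketched as follows. scikit-learn is an assumed tooling choice (the article does not name its ML stack), and the data here is purely synthetic, so the scores are not meaningful, only the mechanics:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 31))    # stand-in 31-digit feature vectors X_i
y = rng.integers(1, 6, size=200)  # stand-in quality labels L_i in 1..5

# mean 10-fold cross-validation accuracy per classifier
scores = {}
for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("NB", GaussianNB())]:
    scores[name] = cross_val_score(clf, X, y, cv=10).mean()
```

With the real labeled event instances I_i in place of the synthetic arrays, this loop reproduces the kind of per-classifier comparison shown in Table 1.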
For this purpose, the exercises <em>Knee Bend<\/em> and <em>Lunges<\/em> were chosen, which are representative of the whole dataset. Figure 9 visualizes the segmentation of the <code><em>S<sub>MMS<\/sub><\/em><\/code> of an exercise sequence containing <em>20<\/em> recurrent exercise events. On the left, several gaps and overlaps between the segmented events are visible, while the adaptive segmentation algorithm produces seamless segments as depicted on the right.<\/p>\n<figure id=\"attachment_17860\" aria-describedby=\"caption-attachment-17860\" style=\"width: 1024px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-17860\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/12\/static-vs-adaptive-1024x208.png\" alt=\"Visualization of a sequence of recurrent exercise events with static segmentation (left) and adaptive segmentation (right). \" width=\"1024\" height=\"208\" \/><figcaption id=\"caption-attachment-17860\" class=\"wp-caption-text\">Figure 9: Visualization of a sequence of recurrent exercise events with static segmentation and adaptive segmentation in comparison.<\/figcaption><\/figure>\n<p>An interesting question is whether the seamless adaptive segmentation really has an impact on the final classification results. Table 2 compares the accuracies within supervised learning, again utilizing the <em>RF<\/em>, the <em>C4.5<\/em>, the <em>NB<\/em>, and an <em>SVM<\/em> classifier. Additionally, an automatically configured <em>Hyper Parameter Optimized (HPO)<\/em> classifier is used. 
Here, a defined time span of <code><em><span class=\"st\">\u0394<\/span>t=15<\/em><\/code> minutes is given to identify an optimized classifier and the corresponding hyper-parameters automatically.<\/p>\n<span id=\"tablepress-39-description\" class=\"tablepress-table-description tablepress-table-description-id-39\">Table 2: Classification results for sheer activity recognition while using adaptive and static segmentation, respectively.<\/span>\n\n<table id=\"tablepress-39\" class=\"tablepress tablepress-id-39\" aria-describedby=\"tablepress-39-description\">\n<thead>\n<tr class=\"row-1\">\n\t<th class=\"column-1\">Classifier<\/th><th class=\"column-2\">KN, adaptive (%)<\/th><th class=\"column-3\">KN, static (%)<\/th><th class=\"column-4\">LU, adaptive (%)<\/th><th class=\"column-5\">LU, static (%)<\/th>\n<\/tr>\n<\/thead>\n<tbody class=\"row-striping row-hover\">\n<tr class=\"row-2\">\n\t<td class=\"column-1\">RF<\/td><td class=\"column-2\">93.40<\/td><td class=\"column-3\">90.55<\/td><td class=\"column-4\">90.00<\/td><td class=\"column-5\">87.45<\/td>\n<\/tr>\n<tr class=\"row-3\">\n\t<td class=\"column-1\">C4.5<\/td><td class=\"column-2\">84.60<\/td><td class=\"column-3\">77.46<\/td><td class=\"column-4\">80.50<\/td><td class=\"column-5\">73.94<\/td>\n<\/tr>\n<tr class=\"row-4\">\n\t<td class=\"column-1\">NB<\/td><td class=\"column-2\">58.50<\/td><td class=\"column-3\">52.54<\/td><td class=\"column-4\">70.30<\/td><td class=\"column-5\">65.55<\/td>\n<\/tr>\n<tr class=\"row-5\">\n\t<td class=\"column-1\">SVM<\/td><td class=\"column-2\">80.20<\/td><td class=\"column-3\">69.20<\/td><td class=\"column-4\">80.80<\/td><td class=\"column-5\">77.61<\/td>\n<\/tr>\n<tr class=\"row-6\">\n\t<td class=\"column-1\">HPO<\/td><td class=\"column-2\">100.0<\/td><td class=\"column-3\">99.60<\/td><td class=\"column-4\">100.0<\/td><td class=\"column-5\">93.76<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<!-- #tablepress-39 from cache -->\n<p>Again, for all experiments a <em>10<\/em>-fold 
cross-validation is performed. The results show that training and classification using adaptive segmentation always lead to better accuracy, by a mean of <em>5.06%<\/em>. But the better classification performance also comes with some limitations. The adaptive algorithm needs <em>221ms<\/em> on average for the segmentation of one event, while the static approach needs <em>119ms<\/em>. Moreover, the static algorithm proved to be more robust, segmenting <em>99.19% (7,473)<\/em> of all <em>7,534<\/em> tracked events, while the adaptive algorithm could only extract <em>98.39% (7,413)<\/em> items.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Runtime-During-Training-and-Classification\"><\/span>Runtime During Training and Classification<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>One of the initial requirements for the <em>SensX<\/em> sensor system is the ability to learn new activity classes ad hoc as well as to classify given events virtually in real time. 
For the classification with adaptive segmentation, the architecture needs <em>1.791s<\/em> on average: <em>1.5s<\/em> until an exercise event has been completed by the athlete, <em>221ms<\/em> for its segmentation, and <em>70ms<\/em> for the actual classification.<\/p>\n<figure id=\"attachment_17872\" aria-describedby=\"caption-attachment-17872\" style=\"width: 922px\" class=\"wp-caption alignnone\"><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-17872\" src=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/12\/runtime_classification_performance.png\" alt=\"Figure 10: Comparison of classification accuracy depending on the number of instances used for training with different classifiers.\" width=\"922\" height=\"286\" srcset=\"https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/12\/runtime_classification_performance.png 922w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/12\/runtime_classification_performance-300x93.png 300w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/12\/runtime_classification_performance-768x238.png 768w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/12\/runtime_classification_performance-400x124.png 400w, https:\/\/www.inovex.de\/wp-content\/uploads\/2019\/12\/runtime_classification_performance-360x112.png 360w\" sizes=\"auto, (max-width: 922px) 100vw, 922px\" \/><figcaption id=\"caption-attachment-17872\" class=\"wp-caption-text\">Figure 10: Comparison of classifier accuracy depending on the number of instances used for training with different classifiers.<\/figcaption><\/figure>\n<p>Figure 10 shows the results for model training with the <em>4<\/em> different classifiers and their accuracy during validation with a decreasing number of training instances. In this experiment, the former quality labels <em>L<sub>i<\/sub><\/em> for each <em>e<sub>i<\/sub><\/em> are substituted with their activity class names (<em>CR, LU<\/em>, etc.). 
The classifiers are first trained with all <em>7,413<\/em> exercise event instances; subsequently, the training set is reduced in steps of <em>150<\/em> randomly picked instances and a new model is trained each time. The validation is done with all <em>7,413<\/em> instances for each model. Once fewer than <em>150<\/em> instances are left, the reduction step is decreased to <em>10<\/em> instances per training run in order to increase the resolution of the results. Figure 10 shows that training a model with roughly <em>200<\/em> instances of exercise events is already sufficient for all utilized classifiers to achieve an accuracy of more than <em>95%<\/em> for sheer activity recognition.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Summary-and-Outlook\"><\/span>Summary and Outlook<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>This article presents a distributed sensor system for tracking and analysis of human motion called <em>SensX<\/em>. To this end, a supervised machine learning process chain consisting of <em>5<\/em> crucial steps is introduced and implemented. Additionally, a static and an adaptive segmentation algorithm for multi-dimensional time series are compared regarding their advantages and disadvantages. <em>SensX<\/em> is capable of tracking and analyzing human motion virtually in real time and also includes a concept for the qualitative assessment of individual motion events. These capabilities are verified in the evaluation together with further investigations concerning ad-hoc training of new models and the duration of event classification.<\/p>\n<p>Nonetheless, there are many open issues which are not covered by this article, such as the identification of specific malpositions during exercise as well as the inspection of non-recurrent motion events, which are not extractable by using the proposed segmentation algorithms. 
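The shrinking-training-set experiment behind Figure 10 can be sketched as below. The data is synthetic, and scikit-learn's CART decision tree stands in for C4.5 (scikit-learn does not ship a C4.5 implementation), so only the mechanics of the experiment are shown, not the article's numbers:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(900, 31))   # stand-in feature vectors
y = (X[:, 0] > 0).astype(int)    # stand-in activity classes

idx = rng.permutation(len(X))
accuracies = []
# shrink the training set in steps of 150 randomly picked instances
for n_train in range(len(X), 150, -150):
    train = idx[:n_train]
    clf = DecisionTreeClassifier(random_state=0).fit(X[train], y[train])
    # validate every model against all instances, as in the experiment
    accuracies.append(accuracy_score(y, clf.predict(X)))
```

Plotting `accuracies` against the training-set sizes yields a learning curve of the kind shown in Figure 10.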
These and other open tasks are subject of ongoing investigations and experiments within this field of research.<\/p>\n<p>[1] C. Marouane. <em>Visuelle Verfahren f\u00fcr ortsbezogene Dienste<\/em>. PhD thesis, LMU, 2017.<\/p>\n<p>[2] A. Pfister, A. M. West, S. Bronner, and J. A. Noah. <em>Comparative Abilities of Microsoft Kinect and Vicon 3D motion Capture for Gait Analysis<\/em>. <em>Journal of Medical Engineering &amp; Technology<\/em>, 38:274\u2013280, 2014.<\/p>\n<p>[3] T. Komura, B. Lam, R. W. Lau, and H. Leung. <em>e-Learning Martial Arts<\/em>. In <em>International Conference on Web-Based Learning<\/em>, pages 239\u2013248. Springer, 2006.<\/p>\n<p>[4] <a href=\"https:\/\/www.xsens.com\/\">https:\/\/www.xsens.com\/<\/a>, last visited October 7<sup>th<\/sup>, 2019.<\/p>\n<p>[5] <a href=\"https:\/\/www.getenflux.com\/\">https:\/\/www.getenflux.com\/<\/a>, last visited October 7<sup>th<\/sup>, 2019.<\/p>\n<p>[6] A. Ebert, M. Kiermeier, C. Marouane, and C. Linnhoff-Popien. <em>SensX: About Sensing and Assessment of Complex Human Motion.<\/em> In <em>14th IEEE International Conference on Networking, Sensing and Control (ICNSC)<\/em>, Calabria, IEEE Xplore, 2017.<\/p>\n<p>[7] <a href=\"https:\/\/mbientlab.com\/\">https:\/\/mbientlab.com\/metamotionc\/<\/a>, mbientlab Metawear wearable sensors, last visited November 26<sup>th<\/sup>, 2019.<\/p>\n<p>[8] A. Bulling, U. Blanke, and B. Schiele. <em>A tutorial on human activity recognition using body-worn inertial sensors.<\/em> In <i>ACM Computing Surveys (CSUR)<\/i>, 2014.<\/p>\n<p>[9] I. W. Selesnick and C. S. Burrus. <em>Generalized digital Butterworth filter design.<\/em> IEEE Transactions on Signal Processing, 46(6):1688\u20131694, 1998.<\/p>\n<p><span class=\"person_name\">[10] A. Ebert.\u00a0<\/span> <em>Erfassung, Erkennung und qualitative Analyse von menschlicher Bewegung<\/em>. 
Dissertation, LMU M\u00fcnchen: Fakult\u00e4t f\u00fcr Mathematik, Informatik und Statistik, 2019<\/p>\n","protected":false}
Activity with Wearable Sensors"}]},{"@type":"WebSite","@id":"https:\/\/www.inovex.de\/de\/#website","url":"https:\/\/www.inovex.de\/de\/","name":"inovex GmbH","description":"","publisher":{"@id":"https:\/\/www.inovex.de\/de\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.inovex.de\/de\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"de"},{"@type":"Organization","@id":"https:\/\/www.inovex.de\/de\/#organization","name":"inovex GmbH","url":"https:\/\/www.inovex.de\/de\/","logo":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/logo\/image\/","url":"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/inovex-logo-16-9-1.png","contentUrl":"https:\/\/www.inovex.de\/wp-content\/uploads\/2021\/03\/inovex-logo-16-9-1.png","width":1921,"height":1081,"caption":"inovex GmbH"},"image":{"@id":"https:\/\/www.inovex.de\/de\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/inovexde","https:\/\/x.com\/inovexgmbh","https:\/\/www.instagram.com\/inovexlife\/","https:\/\/www.linkedin.com\/company\/inovex","https:\/\/www.youtube.com\/channel\/UC7r66GT14hROB_RQsQBAQUQ"]},{"@type":"Person","@id":"https:\/\/www.inovex.de\/de\/#\/schema\/person\/9e0f5e7089052862c68205b2afd8be83","name":"Andr\u00e9 Ebert","image":{"@type":"ImageObject","inLanguage":"de","@id":"https:\/\/secure.gravatar.com\/avatar\/c79c755a18d43fa4068c1bc5eb1354839140ec364465fe4bdb8ed668740925ec?s=96&d=retro&r=gc010af23d21d9b481a0d0915276e4640","url":"https:\/\/secure.gravatar.com\/avatar\/c79c755a18d43fa4068c1bc5eb1354839140ec364465fe4bdb8ed668740925ec?s=96&d=retro&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/c79c755a18d43fa4068c1bc5eb1354839140ec364465fe4bdb8ed668740925ec?s=96&d=retro&r=g","caption":"Andr\u00e9 
Ebert"},"url":"https:\/\/www.inovex.de\/de\/blog\/author\/aebert\/"}]}},"_links":{"self":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/17354","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/users\/133"}],"replies":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/comments?post=17354"}],"version-history":[{"count":4,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/17354\/revisions"}],"predecessor-version":[{"id":39807,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/posts\/17354\/revisions\/39807"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/media\/17973"}],"wp:attachment":[{"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/media?parent=17354"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/tags?post=17354"},{"taxonomy":"service","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/service?post=17354"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.inovex.de\/de\/wp-json\/wp\/v2\/coauthors?post=17354"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}