HUMAN ACTIVITY RECOGNITION
USING SMARTPHONE
Submitted in partial fulfilment of the
requirements for the award of the degree of
Bachelor of Technology
in
Computer Science and Engineering
Guide:
Kundan Kumar Chandan
Assistant Professor
Comp. Sc. & Engg. Dept.
Submitted By:
Anup Kumar Singh : 00220902712
Randhir Kumar Gupta : 03120902712
Shubham Jain : 04020902712
Kunal Kumar Gupta : 08620902712
G.B. Pant Govt. Engineering College, New Delhi
GGS Indraprastha University, Dwarka, Sector 16C
(2012-2016)
Certificate
This is to certify that the project report entitled “Human Activity Recognition Using
Smartphone”, which is submitted by Mr. Anup Kumar Singh, Mr. Randhir Kumar Gupta,
Mr. Shubham Jain and Mr. Kunal Kumar Gupta, Roll Nos. 00220902712, 03120902712,
04020902712 and 08620902712 respectively, is an authentic work carried out by them at G.B.
Pant Govt. Engineering College under my guidance. The matter embodied in this project
work has not been submitted earlier for the award of any degree, to the best of my knowledge
and belief.
Kundan Kumar Chandan
Assistant Professor
Comp. Sc. & Engg. Department
G.B. Pant Govt. Engg. College
Okhla Phase-3, New Delhi
Dr. Krishna Singh
Head of Department
Computer Science and Engineering Department
G.B. Pant Govt. Engineering College, New Delhi
Acknowledgement
On the submission of our project report on “Human Activity Recognition Using
Smartphone”, we would like to extend our gratitude and sincere thanks to our
supervisor Mr. Kundan Kumar Chandan, Assistant Professor, Department of
Computer Science and Engineering, for his constant motivation and support
during the course of our work in the last semester. We truly appreciate and value
his esteemed guidance and encouragement from the beginning to the end of this
project. We are indebted to him for having helped us shape the problem and for
providing insights towards the solution.
We want to thank all our teachers for providing a solid background for our
studies thereafter. They have been great sources of inspiration to us and we thank
them from the bottom of our heart.
Above all, we would like to thank all our friends whose direct and indirect
support helped us complete our project in time. The project would have been
impossible without their perpetual moral support.
Anup Kumar Singh : 00220902712 (anupksingh95@gmail.com)
Randhir Kumar Gupta : 03120902712 (randhirgupta92@gmail.com)
Shubham Jain : 04020902712 (shubham1993jain@gmail.com)
Kunal Kumar Gupta : 08620902712 (kunalgupta141993@gmail.com)
Abstract
Human activity recognition has wide applications in medical research and security
systems. In this project, we design a robust activity recognition system based on a
Smartphone.
The system uses a 3-dimensional Smartphone accelerometer and gyroscope to collect
time series signals, from which 31 features are generated in both the time and frequency
domains. Activities are classified using 4 different passive learning methods, i.e., the
quadratic classifier, k-nearest neighbor algorithm, support vector machine, and artificial
neural networks.
We also apply active learning algorithms to reduce the data labeling expense. Experimental
results show that the classification rate of passive learning reaches 96.4% and is robust
to common positions and poses of the cell phone. The results of active learning on real data
demonstrate a reduction in labeling effort while achieving performance comparable to
passive learning.
Contents
Certificate
Acknowledgement
Abstract
List of Figures and Tables
Introduction
    Overview
Motivation
Literature Review
Background
    Sensing Activity
    Accelerometer
    Gyroscope
    Smartphone as Wearable Sensors
Machine Learning
    Supervised Learning
    Unsupervised Learning
    Semi-supervised Learning
    Reinforcement Learning
SVM
    Extension to the Multiclass SVM
    Performance Evaluation
    Confusion Matrix
    Feature Generation
    Sensing and Data Collection
    Feature Selection and Extraction
    Experimental Data Collection
    Data Recording
    Smartphone Selection
    Up-to-date Smartphones
    App Development
Conclusions
    Future Work
References
List of Figures and Tables
Figure 1: Examples of ambient and wearable devices
Figure 2: Accelerometer (MEMS) Sensor
Figure 3: Accelerometer Data Plot
Figure 4: Gyroscope
Figure 5: Scatter Plot (Gyroscope)
Figure 6: Commercial Smartphone (SGSII) and some of its features
Figure 7: Example of a binary classification problem
Figure 8: Example of a multiclass problem in a two-dimensional space
Figure 9: Confusion Matrix for SVM
Figure 10: Activity Recognition process pipeline
Figure 11: Importing Data
Figure 12: Raw Data
Figure 13: The Human Activity Recognition Process Pipeline
Figure 14: ROC Curve (Walking)
Figure 15: ROC Curve (Standing)
Figure 16: ROC Curve (Climbing Stairs)
Figure 17: ROC Curve (Sitting)
Figure 18: ROC Curve (Laying)
Figure 19: Prediction Activity (Standing)
Figure 20: Prediction Activity (Climbing Stairs)
Figure 21: Prediction Activity (Walking)
Figure 22: Prediction Activity (Laying)
Figure 23: Prediction Activity (Sitting)
Table 1: Smartphone sensors with potential use in motion detection
Table 2: Features Generation
Table 3: Smartphone and 9X2 sensor specifications
Introduction
Overview
The demands for understanding human activities have grown in health-care domain,
especially in elder care support, rehabilitation assistance, diabetes, and cognitive disorders.
A huge amount of resources can be saved if sensors can help caretakers record and monitor
the patients all the time and report automatically when any abnormal behavior is detected.
Other applications lie in security and surveillance. Cameras can be taught to
define a restricted area and mark the objects under focus; these objects could be people or
items such as bags. If an unfamiliar bag appears and remains in place for a period of time,
the system can alert the police or security forces.
Many studies have successfully identified activities using wearable sensors with very low
error rate, but the majority of the previous works are done in the laboratories with very
constrained settings. Readings from multiple body-attached sensors achieve low error-rate,
but the complicated setting is not feasible in practice.
This project uses low-cost and commercially available smartphones as sensors to identify
human activities. The growing popularity and computational power of smartphones make
them ideal candidates for non-intrusive body-attached sensors. According to statistics on US
mobile subscribers, around 44% of mobile subscribers in 2011 owned smartphones and 96%
of these smartphones had built-in inertial sensors such as an accelerometer or gyroscope.
Research has shown that the gyroscope can help activity recognition even though its
contribution alone is not as good as that of the accelerometer. Because the gyroscope is not as
easily accessed in cell phones as the accelerometer, our system only uses readings from a
3-dimensional accelerometer. Unlike many other works before, we relaxed the constraints of
attaching sensors to a fixed body position with a fixed device orientation. In our design, the
phone can be placed at any position around waist such as jacket pocket and pants pocket,
with arbitrary orientation. These are the most common positions where people carry mobile
phones.
A training process is always required when a new activity is added to the system. Parameters
of the same algorithm may need to be trained and adjusted when the algorithm runs on
different devices due to the variance of sensors. However, labelling time-series data is a
time-consuming process and it is not always possible to ask users to label all the training
data. As a result, we propose using an active learning technique to accelerate the training
process. Given a classifier, active learning intelligently queries the unlabelled samples and
learns the parameters from the correct labels answered by an oracle, usually a human. In
this fashion, users label only the samples that the algorithm asks for, and the total amount
of required training samples is reduced. To the best of our knowledge, there is no previous
study applying active learning to the human activity recognition problem.
The goal of this project is to design a lightweight and accurate system on a smartphone that
can recognize human activities. Moreover, to reduce the labeling time and burden, active
learning models are developed. Through testing and comparing different learning
algorithms, we find the one that best fits our system in terms of efficiency and accuracy on a
smartphone.
Human Activity Recognition is a multidisciplinary research field that aims to gather data
regarding people's behavior and their interaction with the environment in order to deliver
valuable context-aware information. It has contributed to the development of human-
centered areas of study such as Ambient Intelligence and Ambient Assisted Living, which
concentrate on the improvement of people's Quality of Life.
Motivation
Understanding people's actions and their interaction with the environment is a key element
for the development of the aforementioned intelligent systems. Human Activity
Recognition is a field that specifically deals with this issue through the integration of
sensing and reasoning, in order to deliver context-aware data that can be employed to
provide personalized support in many applications.
As a simple example, imagine a smart home equipped with ambient sensors able to detect
people's presence and the activation of household appliances. It is possible to infer the
activities performed by its residents based on the sensor signals along with other relevant
aspects such as the time of day and date (e.g. a person in the kitchen in the morning
while the coffee machine is on suggests that the person is making breakfast). Consequently, the
collected HAR information can be exploited to anticipate people's future requirements and
respond to them (e.g. by automatically pre-heating the coffee machine,
controlling room lighting and temperature, etc.).
In the HAR framework, there are still several issues that need to be addressed, some of
which are: obtrusiveness of current wearable sensors; lack of fully pervasive systems able
to reach users at any location any time; privacy concerns regarding invasive and continuous
monitoring of activities (e.g. by using video cameras); difficulty of performing HAR in
real-time; battery limitations of wearable devices; and dealing with content extraction from
sparse multi-sensor data.
Literature Review
Human activity recognition has been studied for years and researchers have proposed
different solutions to attack the problem. Existing approaches typically use vision sensors,
inertial sensors, or a mixture of both. Machine learning and threshold-based algorithms
are often applied. Machine learning usually produces more accurate and reliable results,
while threshold-based algorithms are faster and simpler. One or multiple cameras have
been used to capture and identify body posture. Multiple accelerometers and gyroscopes
attached to different body positions are the most common solution. Approaches that
combine both vision and inertial sensors have also been proposed. Another essential part
of all these algorithms is data processing. The quality of the input features has a great
impact on the performance. Some previous works are focused on generating the most
useful features from the time series data set. The common approach is to analyse the signal
in both time and frequency domain.
Active learning techniques have been applied to many machine learning problems where
labeling samples is time-consuming and labor-expensive. Some applications include speech
recognition, information extraction, and handwritten character recognition. This technique,
however, had not previously been applied to the human activity recognition problem.
Background
Sensing Activity:
The right choice of sensors is one of the first elements to be taken into consideration for
the design of HAR systems. A number of sensors have already been explored to extract
activity-related information. They measure several attributes including vital signs (e.g.
heart rate, body temperature, and blood-pressure), environmental signals (e.g. light
intensity, temperature and sound levels), motion (e.g. acceleration, speed) and positioning
(e.g. global and indoor location).
Figure 1: Examples of ambient and wearable devices
Based on the sensor placement with respect to the user, the sensing mechanisms are
classified as ambient, when sensors are placed at static locations in the environment, or
wearable, when they are worn or attached to the user's body.
Accelerometer
The accelerometer is an instrument that measures the experienced physical acceleration of
an object. It has been employed for several applications in science, medicine, engineering
and industry such as for measuring vibrations in machinery, acceleration in high-speed
vehicles and moving loads on bridges.
Figure 2: Accelerometer(MEMS) Sensor
Its principle of operation generally consists of a seismic mass which is displaced in relation
to the acceleration it is subjected to. The displacement can then be transduced into a
measurable electrical signal. This phenomenon has been applied for the development of
Microelectromechanical Systems (MEMS) sensors. This technology allows the creation of
nano-scale devices fabricated with semiconductors. They are advantageous compared to other
sensor technologies because they can be produced at large scale and with low
manufacturing costs. Most common MEMS accelerometers work as a capacitive sensor
composed of a cantilever beam with a proof mass whose deflection is correlated with the
acceleration experienced by the sensor.
Figure 3: Accelerometer Data Plot
Gyroscope
A gyroscope is a sensing device for measuring orientation. It has been used in many
applications such as inertial navigation systems, aerial vehicles for stability augmentation
(e.g. in quad copters) and recently, it has been introduced in electronic devices (e.g.
Smartphone, game consoles) for enhancing user interfaces and the gaming experience. For
HAR, this sensor has been employed in various applications such as the detection of
activities (e.g. walking, stair climbing) and transitions between postures (e.g. from
standing to sitting).
Figure 4: Gyroscope
Gyroscopes have also been produced with MEMS technologies. However, sensors of this
type can only measure orientation indirectly: they estimate angular speed, which can then
be integrated over time in order to obtain orientation. A reference initial angular position is
required first to achieve this. These sensors are also highly prone to noise, which can cause
the measurement to drift as it is integrated.
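As a concrete illustration of the integration step described above (an illustrative sketch in Java, not code from this project), the snippet below accumulates angular velocity samples over time to estimate an angle around one axis; any constant bias in the readings is also accumulated, which is exactly the drift problem mentioned here.

```java
/** Illustrative sketch: estimating orientation by integrating gyroscope angular velocity. */
public class GyroIntegrator {
    private double angleRad = 0.0;   // a known initial angular position is required

    /**
     * @param angularVelocity angular velocity around one axis, in rad/s
     * @param dtSeconds       time elapsed since the previous sample
     */
    public void update(double angularVelocity, double dtSeconds) {
        // Numerical integration: angle(t) = angle(t-1) + omega * dt.
        // Any constant sensor bias is integrated too, so the estimate drifts over time.
        angleRad += angularVelocity * dtSeconds;
    }

    public double getAngleRadians() {
        return angleRad;
    }
}
```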
Figure 5: Scatter Plot (Gyroscope)
Smartphone as Wearable sensors
Wearable technology comprises all body-worn devices that allow gathering
and processing information from the users and their interaction with the environment. In this
project, we use a Smartphone as a wearable device, given that smartphones are now provided
with numerous internal sensors, some of which can be exploited for motion sensing and are thus
appropriate for the identification of human activities.
Figure 6: Commercial Smartphone (SGSII) and some of its features.
We selected two of them: the accelerometer and the gyroscope. They provide information
about the user's linear acceleration and angular velocity respectively when used as wearable
sensors, and are not highly affected by external factors such as bad indoor signal reception
in GPSs or electromagnetic noise in compasses. However, accelerometer measurements
are always influenced by the gravity component when detecting body motion
acceleration. We work with the acceleration and angular velocity signals, which are read
directly from these sensors, and avoid integrating them to obtain position or orientation
information, given the known drift due to noise usually found in this type of inertial
sensor.
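The following is a minimal sketch (in Java, the language used for the apps in this work) of one common way to attenuate the gravity component mentioned above: a first-order low-pass filter tracks the slowly varying gravity estimate, which is then subtracted from the raw reading to approximate the body motion acceleration. The smoothing factor ALPHA is an illustrative value, not taken from this report.

```java
/** Minimal sketch: separating gravity from body acceleration with a low-pass filter. */
public class GravitySeparator {
    private static final float ALPHA = 0.9f;   // illustrative smoothing factor
    private final float[] gravity = new float[3];

    /** Returns the estimated body (linear) acceleration for one raw accelerometer sample. */
    public float[] filter(float ax, float ay, float az) {
        float[] raw = {ax, ay, az};
        float[] linear = new float[3];
        for (int i = 0; i < 3; i++) {
            // Low-pass: gravity follows the slowly changing component of the signal.
            gravity[i] = ALPHA * gravity[i] + (1 - ALPHA) * raw[i];
            // High-pass: what remains approximates the body motion acceleration.
            linear[i] = raw[i] - gravity[i];
        }
        return linear;
    }
}
```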
Sensor: Accelerometer
Measures/Captures: Linear acceleration
Advantages/Disadvantages: The gravity component is always present in the measurements. 3D position estimation requires double integration and is therefore susceptible to a large position drift.

Sensor: Gyroscope
Measures/Captures: Angular velocity
Advantages/Disadvantages: Angle orientation is estimated through integration of the angular velocity and is prone to drift due to signal noise.

Sensor: Camera
Measures/Captures: Images & video
Advantages/Disadvantages: Data extraction from images is computationally expensive and battery-inefficient. It raises concerns regarding the user's privacy.

Sensor: Compass
Measures/Captures: Magnetic field
Advantages/Disadvantages: Measures orientation with respect to the magnetic north but is susceptible to electromagnetic noise.

Sensor: GPS
Measures/Captures: Geo-location
Advantages/Disadvantages: Directly measures global 3D positioning but does not work indoors due to low signal strength from satellites.

Sensor: Barometer
Measures/Captures: Atmospheric pressure
Advantages/Disadvantages: Can deliver altitude coordinates and helps to rapidly acquire a GPS location. Its precision is low and can be affected by atmospheric factors (e.g. air currents and changing weather).
Table 1: Smartphone sensors with potential use in motion detection
Machine Learning
Machine learning is the area of study concerned with the design, development and
evaluation of systems capable of learning from data. In many common situations where we
need, for instance, to complete a particular task, or perhaps to make some prediction
regarding a given issue, it is possible to find solutions by inspecting and analysing
previous observations with characteristics similar to the problem being addressed. In other words,
Machine Learning systems are capable of predicting future actions based on past
experiences.
Machine Learning algorithms have been categorized according to the type of input used
for training and its expected outcome.
Supervised Learning
In supervised learning, the algorithm generates a function that maps inputs to desired
outputs. One standard formulation of the supervised learning task is the classification
problem: the learner is required to learn (to approximate the behavior of) a function which
maps a vector into one of several classes by looking at several input-output examples of
the function.
Unsupervised Learning
In unsupervised learning, the algorithm models a set of inputs for which labeled examples are not available.
Semi-supervised Learning
Semi-supervised learning combines labeled and unlabeled examples to generate an appropriate
function or classifier.
Reinforcement Learning
In reinforcement learning, the algorithm learns a policy of how to act given an
observation of the world. Every action has some impact on the environment, and the
environment provides feedback that guides the learning algorithm.
SVM (Support Vector Machine)
A Support vector machine is one of the most commonly used supervised ML algorithms.
It was initially proposed by Vladimir Vapnik and his colleagues with the aim of solving
linear and non-linear binary classification problems. Afterward, this algorithm was
adapted for application in multiclass classification and regression analysis.
The SVM for classification is a deterministic approach that aims to find the hyperplanes
that best separate the data into classes. These subspaces are the ones that provide the largest
margin of separation between the classes of the training data, with the intention of providing a
model with low generalization error for use with unseen data samples.
SVMs are the basis for the classification of activities in this work. For this reason, we now
introduce them, starting from the binary SVM model which is its simplest representation,
to the extended case that allows the classification of more than two classes: the multiclass
SVM. This algorithm will be further revised throughout the development of this research
to tackle specific requirements for our application in aspects such as kernel type, arithmetic
used and algorithm output type.
Figure 7: Example of a binary classification problem in a two-dimensional space. The line
represents a possible solution to the problem and the dots with red outline are the misclassified
dataset samples.
Extension to the Multiclass SVM
It is possible to generalize binary Machine learning models to solve problems with more
than two classes. This process is known as multiclass or multinomial classification. The figure
below shows a simple example of a set of elements from 3 different classes in a space of
two dimensions, each one represented with a different color (red, green and blue). Also,
separating hyperplanes are chosen as a possible solution to this problem. Several
methods have been previously proposed for solving multiclass problems from binary
formulations, but the two most commonly used are one-vs-all (OVA) and one-vs-one (OVO).
Their difference lies in the way they compare each class of interest against the remaining
ones: all together in the first case and one by one in the latter. In this work we use
OVA and take advantage of the fact that its output directly represents how likely each class is
to match a new test sample compared to the rest.
The OVA approach consists of constructing a set of m binary SVMs, one associated
with each existing class c. They are built from positive training samples coming from the
class of interest (labeled as +1) and negative samples which contain the remaining samples
(labeled as −1). Once the SVMs are learned, it is possible to compare them to determine
which class is the most likely to represent a test sample.
The output of each binary SVM for class c, f_c(x), is either positive or negative, and its sign
indicates whether the new sample is classified as that class or not. Ideally, for a given
sample in a multiclass problem, only one of the binary classifiers should be positive.
Therefore, the classification of a new sample can be formulated as a winner-take-all
arbiter that selects the label c* corresponding to the class with the maximum SVM output:

c* = arg max_c f_c(x)
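A minimal sketch of this winner-take-all rule is shown below. The per-class decision values f_c(x) are assumed to come from m already-trained binary SVMs (training is not shown), and the hypothetical decisionValue method stands in for whatever SVM library would be used.

```java
/** Sketch of the one-vs-all (OVA) decision rule: pick the class with the largest SVM output. */
public class OvaClassifier {
    /** Hypothetical interface to one trained binary SVM f_c(x). */
    public interface BinarySvm {
        double decisionValue(double[] features);
    }

    private final BinarySvm[] perClassSvms;   // one binary SVM per class c = 0..m-1

    public OvaClassifier(BinarySvm[] perClassSvms) {
        this.perClassSvms = perClassSvms;
    }

    /** Returns c* = arg max_c f_c(x). */
    public int predict(double[] features) {
        int best = 0;
        double bestValue = Double.NEGATIVE_INFINITY;
        for (int c = 0; c < perClassSvms.length; c++) {
            double value = perClassSvms[c].decisionValue(features);
            if (value > bestValue) {
                bestValue = value;
                best = c;
            }
        }
        return best;
    }
}
```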
Figure 8: Example of a multiclass problem in a two-dimensional space. Three classes represented
by the red, green and blue dots are separated by hyperplanes which provide a possible linear
solution to the classification problem.
Performance Evaluation
The evaluation of Machine learning algorithms is predominantly carried out through
statistical analysis of the models using the available experimental data. The most common
method is the confusion matrix, which represents the algorithm performance by
clearly identifying the types of errors (false positives and negatives) and correctly predicted
samples over the test data. From it, various metrics can also be extracted such as model
accuracy, sensitivity, specificity, precision and F1-Score. In addition, other comparative
qualitative indicators, such as the number of available activities, prediction speed and
memory consumption can support the selection of human activity recognition algorithms.
Confusion Matrix
A common method to visualize the performance of a Machine learning algorithm is
through the confusion matrix C, also called a contingency table. Assuming there are m
classes available, a typical confusion matrix is a square matrix of size m × m
where misclassifications are visible outside the diagonal. For two classes a and b, the entries distinguish:
• True Positives (TP): actual samples of class a correctly predicted as class a
• True Negatives (TN): actual samples of class b correctly predicted as class b
• False Positives (FP): actual samples of class b incorrectly predicted as class a
• False Negatives (FN): actual samples of class a incorrectly predicted as class b
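The sketch below (an illustrative example, not code from this project) shows how the usual metrics named above, such as precision, sensitivity and F1-Score, can be derived for one class of interest from an m × m confusion matrix C, where C[i][j] counts samples of actual class i predicted as class j.

```java
/** Sketch: per-class precision, recall (sensitivity) and F1-score from a confusion matrix. */
public class ConfusionMatrixMetrics {
    /** confusion[i][j] = number of samples of actual class i predicted as class j. */
    public static double[] metricsForClass(int[][] confusion, int c) {
        int tp = confusion[c][c];
        int fp = 0, fn = 0;
        for (int i = 0; i < confusion.length; i++) {
            if (i != c) {
                fp += confusion[i][c];   // other classes predicted as c
                fn += confusion[c][i];   // class c predicted as something else
            }
        }
        double precision = (tp + fp == 0) ? 0 : (double) tp / (tp + fp);
        double recall = (tp + fn == 0) ? 0 : (double) tp / (tp + fn);
        double f1 = (precision + recall == 0) ? 0 : 2 * precision * recall / (precision + recall);
        return new double[]{precision, recall, f1};
    }
}
```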
Figure 9: Confusion Matrix for SVM
Feature Generation
To collect the acceleration data, each subject carries a Smartphone for a few hours and
performs some activities. In this project, five kinds of common activities are studied,
including walking, limping, jogging, walking upstairs, and walking downstairs. The
position of the phone can be anywhere close to the waist and the orientation is arbitrary.
A low-pass filter with a 25 Hz cutoff frequency is applied to suppress noise. Also, because
the phone sensor is unstable and may accidentally drop samples, interpolation is
applied to fill the gaps.
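A simplified sketch of such gap filling is shown below (illustrative only, not the project's code): timestamped samples are linearly interpolated onto a uniform time grid so that the later windowing sees evenly spaced data. The 50 Hz target rate used in the comment is inferred from the 256-sample, 5.12 s windows described later.

```java
/** Sketch: resampling irregular accelerometer samples onto a uniform grid by linear interpolation. */
public class LinearResampler {
    /**
     * @param times  sample timestamps in seconds, strictly increasing (may contain gaps)
     * @param values one-axis acceleration values, same length as times
     * @param rateHz target sampling rate, e.g. 50 Hz
     */
    public static double[] resample(double[] times, double[] values, double rateHz) {
        double duration = times[times.length - 1] - times[0];
        int n = (int) Math.floor(duration * rateHz) + 1;
        double[] out = new double[n];
        int j = 0;
        for (int i = 0; i < n; i++) {
            double t = times[0] + i / rateHz;
            while (j < times.length - 2 && times[j + 1] < t) j++;   // find the surrounding samples
            double w = (t - times[j]) / (times[j + 1] - times[j]);  // interpolation weight in [0, 1]
            out[i] = (1 - w) * values[j] + w * values[j + 1];
        }
        return out;
    }
}
```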
Time Domain: Variance, Mean, Median, 25% Percentile, 75% Percentile, Correlation between axes, Average resultant acceleration
Frequency Domain: Energy, Entropy, Centroid Frequency, Peak Frequency
Table 2: Features Generation
To analyze the activities over a short period, we group every 256 samples into a window, which
corresponds to 5.12 seconds of data. The choice of 256, a power of two, is a
preferred size when applying the Fast Fourier Transform. For each sample window, 31
features are extracted in both the time domain and the frequency domain, as shown in Table 2.
Except for the average resultant acceleration, all the other features are generated for the x, y
and z directions.
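A simplified sketch of this windowing and feature computation step is shown below. It covers only a few of the time-domain features from Table 2 (mean, variance and average resultant acceleration) for one 256-sample window, and assumes the frequency-domain features would be computed separately from an FFT of the same window.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: grouping accelerometer samples into 256-sample windows and computing a few features. */
public class FeatureWindow {
    public static final int WINDOW_SIZE = 256;   // 5.12 s of data at 50 Hz

    /** Computes mean and variance per axis plus average resultant acceleration for one window. */
    public static List<Double> features(double[][] window) {   // window[i] = {x, y, z}
        List<Double> features = new ArrayList<>();
        for (int axis = 0; axis < 3; axis++) {
            double sum = 0, sumSq = 0;
            for (double[] sample : window) {
                sum += sample[axis];
                sumSq += sample[axis] * sample[axis];
            }
            double mean = sum / window.length;
            double variance = sumSq / window.length - mean * mean;
            features.add(mean);
            features.add(variance);
        }
        double resultantSum = 0;
        for (double[] sample : window) {
            resultantSum += Math.sqrt(sample[0] * sample[0] + sample[1] * sample[1] + sample[2] * sample[2]);
        }
        features.add(resultantSum / window.length);   // average resultant acceleration
        return features;
    }
}
```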
Sensing and Data Collection
The definition of the experimental set-up for data acquisition is also an important aspect of
HAR, and it depends on whether the subject is observed in their habitat with or without any
manipulation by the observer. Naturalistic environments are ideal for experimentation, but
in many cases it is not feasible to exploit them. Therefore, controlled experiments can be
carried out in laboratory conditions aiming to simulate natural settings (semi-naturalistic
environments). Otherwise, fully controlled environments are the last resort for data
acquisition, although the performance of a method/system developed with this approach
is uncertain until it is verified in real situations. Failures in the design of HAR systems can be
due to the lack of real-life considerations such as unaccounted activities or target users,
noise, sensor calibration and positioning, etc.
Figure 10: Activity Recognition process pipeline
The latter, for instance, is highly linked to system performance, as shown in studies where
different sensor locations were evaluated to determine the ideal positions for performing
HAR through the use of wearable accelerometers. Another consideration in the
experimentation process is the number of individuals selected: generally, a larger number
of people covering various age groups and physical conditions is preferred. This is also
directly related to the performance and generalization capability of the system in the
presence of new users.
Figure 11: Importing Data
Figure 12: Raw Data
Feature Selection and Extraction
In a Machine learning problem, feature selection refers to the process of selecting a
significant subset of features that largely determines the discrimination ability of a learning
algorithm. Feature extraction, on the other hand, is an approach to diminish the
dimensionality of an available set of features by performing inter-feature transformations
in order to obtain a new dimensionally reduced representation without largely sacrificing
relevant information from the original set. The curse of dimensionality, which describes
the difficulty in understanding and dealing with high-dimensional data, is certainly linked
with these two reduction mechanisms as they can alleviate the problems that may arise
when working in high-dimensional spaces.
Feature selection and extraction also allow reducing the training times and increasing the
generalization performance in ML problems. They differ, however, in that models in
which feature selection is employed are much easier to interpret: features remain
distinct from each other rather than merged, as in feature-extraction-based approaches.
Depending on the application, the features required for the extraction of
relevant information may vary. In the particular case of HAR, a reduced representation of
the sensor data can be used as the input of a recognition algorithm. This is attained by
estimating various measures from the sensor signals in different domains (e.g. time and
frequency). Nonetheless, other time-frequency representations such as the wavelet
transform are also applicable. Once obtained, the features can be further reduced using feature
selection (e.g. exhaustive search, wrappers, filters and embedded methods), extraction
approaches, or a combination of both.
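As one concrete example of a wrapper method of the kind listed above, the sketch below outlines a greedy forward-selection loop. The accuracyWith evaluation function is a hypothetical stand-in for training and validating the classifier on a candidate feature subset; it is not part of this report's code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.ToDoubleFunction;

/** Sketch of greedy (sequential) forward feature selection driven by a classifier's accuracy. */
public class ForwardSelection {
    public static List<Integer> select(int totalFeatures, int targetCount,
                                       ToDoubleFunction<List<Integer>> accuracyWith) {
        List<Integer> selected = new ArrayList<>();
        while (selected.size() < targetCount) {
            int bestFeature = -1;
            double bestAccuracy = -1;
            for (int f = 0; f < totalFeatures; f++) {
                if (selected.contains(f)) continue;
                List<Integer> candidate = new ArrayList<>(selected);
                candidate.add(f);
                double accuracy = accuracyWith.applyAsDouble(candidate);   // train + validate
                if (accuracy > bestAccuracy) {
                    bestAccuracy = accuracy;
                    bestFeature = f;
                }
            }
            selected.add(bestFeature);   // keep the feature that helps most at this step
        }
        return selected;
    }
}
```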
Figure 13: The Human Activity Recognition Process Pipeline with its four main blocks
Figure 14: ROC Curve (Walking)
Figure 15: ROC Curve (Standing)
Figure 16: ROC Curve (Climbing Stairs)
Figure 17: ROC Curve (Sitting)
Figure 18: ROC Curve (Laying)
Figure 19: Prediction Activity (Standing)
Figure 20: Prediction Activity (Climbing Stairs)
Figure 21: Prediction Activity (Walking)
Figure 22: Prediction Activity (Laying)
Figure 23: Prediction Activity (Sitting)
Experimental Data Collection
Data from the accelerometer has the following attributes: time, acceleration along the x axis,
acceleration along the y axis and acceleration along the z axis.
We used the Support Vector Machine algorithm to analyze the raw data obtained from the
mobile device. We sequentially compared our activity pattern with the training sets, each time
calculating the distances between incoming acceleration data points and the points of our
training sets.
Standard classification algorithms cannot be applied directly to raw time-series
accelerometer data. Instead, the raw time-series data must first be transformed into
examples.
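The distance comparison described above can be illustrated with the following simplified sketch (not the project's actual code): each incoming window, already reduced to a feature vector, is matched against labeled training vectors by Euclidean distance and takes the label of the closest one.

```java
/** Sketch: matching an incoming feature vector to the nearest labeled training example. */
public class NearestTrainingExample {
    public static int nearestLabel(double[] incoming, double[][] trainingFeatures, int[] trainingLabels) {
        int bestLabel = -1;
        double bestDistance = Double.MAX_VALUE;
        for (int i = 0; i < trainingFeatures.length; i++) {
            double sum = 0;
            for (int d = 0; d < incoming.length; d++) {
                double diff = incoming[d] - trainingFeatures[i][d];
                sum += diff * diff;
            }
            double distance = Math.sqrt(sum);   // Euclidean distance to training example i
            if (distance < bestDistance) {
                bestDistance = distance;
                bestLabel = trainingLabels[i];
            }
        }
        return bestLabel;
    }
}
```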
Data Recording
MotoLog was the first Android app we developed for capturing and storing data from the
Smartphone inertial sensors, and it was created for carrying out the HAR experiments. It
also performs real-time visualization of the inertial signals (accelerometer, gyroscope and
magnetometer) on the Smartphone screen, and allows the experiment recordings to be
visualized online.
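A minimal sketch of the kind of sensor capture such an app performs is shown below, using the standard Android SensorManager API; it only logs accelerometer samples and omits MotoLog's storage, visualization and gyroscope/magnetometer handling.

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.util.Log;

/** Sketch: capturing raw accelerometer samples with the standard Android sensor API. */
public class AccelCaptureActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        // SENSOR_DELAY_GAME gives roughly 50 Hz on many devices; the OS controls the real rate.
        sensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values holds acceleration along x, y and z in m/s^2 (gravity included).
        Log.d("AccelCapture", event.timestamp + "," + event.values[0] + ","
                + event.values[1] + "," + event.values[2]);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed for logging */ }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this);   // stop capturing when the activity is paused
    }
}
```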
Smartphone Selection
Finding a suitable Smartphone to perform HAR involves the evaluation of the up-to-date
devices in the market, in which hardware, sensors and software are the primary elements
to be considered. Also, other aspects such as price, market of available brands and potential
future distribution of applications were taken into account. However, by the time this
selection process was done (2011), smartphones with the required embedded sensors
were very limited, as were their specifications regarding speed and direct access to the
sensors through their OS. This motivated the assessment of only the high-end
devices available at that time. Given the accelerated growth of current technologies, it
took just a matter of two years for the required sensors and hardware, with even higher
specifications, to become available in almost every mid- and high-end
Smartphone in the market. This finding expands the range of applications of the
approaches developed in this work to a greater group of devices.
Up-to-date Smartphone
For the selection of the Smartphone regarding sensor type, we took as a reference point an
already available inertial sensor, the 9X2, which had already demonstrated its applicability
in the detection of various human motor disorders in people with disabilities. It is internally
composed of three triaxial sensors: an accelerometer, a gyroscope and a magnetometer. It
collects inertial sensor data that can be stored on a microSD memory card or wirelessly
transmitted to a terminal via Bluetooth. We expected to find smartphones with those
minimum characteristics.
Device            | 9X2                              | iPhone 4                        | I9100 Galaxy S II
Brand             | CETpD                            | Apple                           | Samsung
CPU               | Microchip dsPIC33, 80 MHz        | ARM Cortex-A8, 1 GHz            | ARM Cortex-A9, 1.2 GHz dual-core
ROM Memory        | 2 GB                             | 16 GB / 32 GB                   | 16 GB / 32 GB
RAM Memory        | 16 KB                            | 510 MB                          | 1 GB
Operating System  | N/A                              | iOS 4                           | Android OS v2.3
Accelerometer     | STMicroelectronics LIS3LV02DQ    | STMicroelectronics LIS331DLH    | Bosch Sensortec SMB380
Max Frequency     | 640 Hz                           | 1 kHz                           | 1.5 kHz
Operation Range   | ±2/±6 g                          | ±2/±4/±8 g                      | ±2/±4/±8 g
Gyroscope         | InvenSense IDG650, ISZ650        | STMicroelectronics L3G4200D     | STMicroelectronics L3G4200D
Max Frequency     | 140 Hz                           | 800 Hz                          | 800 Hz
Operation Ranges  | ±440/±2000 °/s                   | ±250/±500/±2000 °/s             | ±250/±500/±2000 °/s
Display           | N/A                              | LED-backlit IPS TFT             | Super AMOLED Plus
External Card     | Yes                              | No                              | Yes
Wireless          | Bluetooth                        | 3G, WLAN, Bluetooth             | 3G, WLAN, Bluetooth
Battery Type      | Li-Ion 1000 mAh                  | Li-Po 1420 mAh                  | Li-Ion 1650 mAh
Battery Stand-by  | Up to 18 h                       | Up to 300 h                     | Up to 610 h
Table 3: Smartphone and 9X2 sensor specifications
Table 3 presents a comparison of the 9X2 sensor with two high-end Smartphone candidates.
It includes their characteristics regarding hardware and software. The specifications of the
embedded Smartphone accelerometers and gyroscopes outperform those of the
reference inertial sensor regarding speed and operation ranges, making them potentially
suitable for application in HAR. However, even though the specified frequencies are
given in the device datasheets, the net frequency of these inertial sensors is limited by
the Smartphone OS, which regulates their operation speed and battery consumption. It is
around 100 Hz and can vary depending on the OS load, but according to prior studies it is
sufficient to properly capture human body motion. Regarding memory and Central Processing Unit
(CPU) characteristics, both smartphones share similar characteristics. As a result, the device
selection was subject to additional factors. The Smartphone selected for
carrying out the HAR experiments was the SGSII. It is managed by the Android OS, an
open source platform with a publicly distributed development environment that
includes a large set of APIs allowing access to the Smartphone hardware and internal
sensors, offers robust ML tools, and has publication policies for apps that are relatively simple to fulfill.
Moreover, developing on this OS also extends the utilization of HAR apps to a wider range
of brands and devices (e.g. tablets, notebooks) which also run Android OS, unlike
iOS, which only operates on devices of a single brand. Figure 6 depicts an image of the
selected device.
Android OS also has a leading position in the market, which is advantageous because it
contributes to an easier distribution of final applications to a larger population
sector. Smartphone market shares in 2013 [Gupta et al., 2013], for example, showed that
the three top mobile OSs were Android OS, iOS and Windows Phone. Together they
made up the vast majority of the phone market, reaching 96%, with Android OS controlling a
remarkable 79% of the market share on its own.
App Development
All the smartphone apps of this work were built using a software solution for Android
development (Android Development Tools (ADT) Bundle) which integrates a collection
of various programs [Android, 2016]:
 Eclipse: an integrated environment for the development of software projects with
multi-language support.
 ADT plug-in: the toolset for Eclipse designed to allow the development of
Android apps.
 Android Software Development Kit (SDK): provides the API libraries and
developer tools required to build apps for Android.
 Android Native Development Kit (NDK): the collection of tools that allows
implementing apps using native-code languages such as C and C++.
The code was written using two languages, namely Java and C. The former was
employed for the development of the graphical user interface and basic app
controls. C was reserved for the computationally expensive tasks such as accessing the
Smartphone sensors, signal processing, running ML algorithms and storing data.
Conclusions
Human activity recognition has broad applications in medical research and human survey
systems. In this project, we designed a smartphone-based recognition system that recognizes
five human activities: walking, limping, jogging, going upstairs and going downstairs. The
system collected time series signals using a built-in accelerometer, generated 31 features
in both the time and frequency domains, and then reduced the feature dimensionality to
improve performance. The activity data were trained and tested using 4 passive learning
methods: the quadratic classifier, k-nearest neighbor algorithm, support vector machine, and
artificial neural networks.
The best classification rate in our experiment was 84.4%, achieved by SVM with
features selected by SFS. Classification performance is robust to the orientation and
position of the smartphone. In addition, active learning algorithms were studied to reduce the
expense of labeling data. Experimental results demonstrated the effectiveness of active
learning in saving labeling labor while achieving performance comparable to passive
learning. Among the four classifiers, KNN and SVM improved the most after applying active
learning. The results demonstrate that entropy and distance to the boundary are robust
uncertainty measures when performing queries for KNN and SVM respectively.
In conclusion, SVM is the optimal choice for our problem.
Future work may consider more activities and implement a real-time system on
smartphone. Other query strategies such as variance reduction and density-weighted
methods may be investigated to enhance the performance of active learning schemes
proposed here.
Future Work
There is still room for improvement in our work, which can be addressed from two different
perspectives:
i) by solving current limitations of our proposed systems, and
ii) by extending our achievements through complementary and novel applications.
In the first case, some issues have arisen, such as the limited number of activities
the system can deal with, the fixed Smartphone
position on the waist, and the need for novel approaches to bring users with distinct
differences in their motion patterns (e.g. people with walking difficulties) into the system.
In the second case, new ideas about how to exploit HAR information in order to provide
new services can be explored. These include the development of context-aware apps for
health and sports monitoring, elderly care and understanding interaction between users
using similar systems. In this section we focus on some of these aspects and propose them
as future research directions:
 For a new user, the performance of the proposed HAR system can be improved if
their motion data is integrated into the learned model. Although this can be done, for
instance, through retraining after following a controlled sequence of activities, this
process can be tedious for the user. Considering that during a normal day it is
possible to gather large amounts of data, we can explore semi-supervised learning
strategies that allow combining this unlabeled data with the already existing
labeled training data in order to produce considerable improvements in the system's
learning accuracy. This can bring advantages to new users, especially those with
particular conditions such as very slow motion or physical disabilities, who are
normally difficult to incorporate in the training and for whom the ML algorithm's
generalization capability is not sufficient.
 The proposed HAR systems are designed specifically for use with waist-mounted
smartphones. Although this position allows some degree of variability, it is important to
explore whether it is also possible to place the device on different body parts such as shirt
or pants pockets, or even worn on the arms (e.g. with armbands for running).
In the first case some problems may arise due to the free and continuous motion of
the device with respect to the body, which can be hard to control (e.g. for
distinguishing between standing and sitting activities). Moreover, now that wearable
devices such as smart watches with embedded inertial sensors are becoming increasingly
popular, it is interesting to investigate how to combine them
with our current Smartphone-based system in order to improve the recognition or to
add new activities that involve the upper limbs, such as typing, brushing teeth and
writing.
 The outcome of the presented HAR system can be used in higher-level context-
aware applications. For example, when combined with location-based services such
as GPS or home presence sensors for achieving indoor and outdoor activity
detection. Also, if activity information is merged with vital-sign sensors, it is
possible to develop apps for medical diagnosis and monitoring of ill patients during
their daily life without third-party supervision. Likewise, it is possible to merge
activity information with the Smartphone connectivity status, such as incoming
calls/messages, in order to control the Smartphone behavior when some specific
activities are occurring, e.g. to avoid receiving phone calls while users are running
or sleeping and to automatically produce unavailability responses. Later, when the
user's activity has changed, the information can be provided in a timely manner. On
the other hand, we also propose the study of the interaction between two or more
users wearing the HAR recognition system in order to infer collective behavior such
as chatting, dancing, playing sports, etc. In conclusion, there is a wide range of new
possibilities and services that need to be explored where HAR can contribute in their
development.
References
[1] Android, 2016. Android developers. http://developer.android.com/index.html. Accessed
03/11/2016
[2] Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra, and Jorge-Luis Reyes-Ortiz.
Human activity recognition on smartphones for mobile context awareness. In Neural Information
Processing Systems. Workshop on Machine Learning Approaches to Mobile Context Awareness,
2012c.
[3] MATLAB Documentation – MathWorks [www.mathworks.com/help/matlab/]
[4] MathWorks – Support [www.mathworks.com/support/]
[5] MATLAB Central – MathWorks [www.mathworks.com/matlabcentral/]
[6] Dataset used
[https://archive.ics.uci.edu/ml/machine-learning-databases/00240/UCI%20HAR%20Dataset.zip]
[7] Davide Anguita, Xavier Parra, Machine recognition of human activities, June 2015
Double Revolving field theory-how the rotor develops torqueDouble Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torque
 
"Lesotho Leaps Forward: A Chronicle of Transformative Developments"
"Lesotho Leaps Forward: A Chronicle of Transformative Developments""Lesotho Leaps Forward: A Chronicle of Transformative Developments"
"Lesotho Leaps Forward: A Chronicle of Transformative Developments"
 
Design For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the startDesign For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the start
 
Computer Networks Basics of Network Devices
Computer Networks  Basics of Network DevicesComputer Networks  Basics of Network Devices
Computer Networks Basics of Network Devices
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 
Work-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptxWork-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptx
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdf
 
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
 
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak HamilCara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
 
Employee leave management system project.
Employee leave management system project.Employee leave management system project.
Employee leave management system project.
 
Engineering Drawing focus on projection of planes
Engineering Drawing focus on projection of planesEngineering Drawing focus on projection of planes
Engineering Drawing focus on projection of planes
 
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
 
A Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna MunicipalityA Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna Municipality
 
AIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech studentsAIRCANVAS[1].pdf mini project for btech students
AIRCANVAS[1].pdf mini project for btech students
 
Online food ordering system project report.pdf
Online food ordering system project report.pdfOnline food ordering system project report.pdf
Online food ordering system project report.pdf
 
Unleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapUnleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leap
 
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
Unit 4_Part 1 CSE2001 Exception Handling and Function Template and Class Temp...
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPT
 
Integrated Test Rig For HTFE-25 - Neometrix
Integrated Test Rig For HTFE-25 - NeometrixIntegrated Test Rig For HTFE-25 - Neometrix
Integrated Test Rig For HTFE-25 - Neometrix
 
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
COST-EFFETIVE  and Energy Efficient BUILDINGS ptxCOST-EFFETIVE  and Energy Efficient BUILDINGS ptx
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
 

Human activity recognition

Acknowledgement  ii
Abstract  iii
List of Figures and Tables  v
Introduction  1
  Overview  1
Motivation  3
Literature Review  4
Background  5-11
  Sensing Activity  5
  Accelerometer  6
  Gyroscope  8
  Smartphone as Wearable Sensors  10
Machine Learning  12-13
  Supervised Learning  12
  Unsupervised Learning  12
  Semi-supervised Learning  12
  Reinforcement Learning  13
SVM  14-34
  Extension to the Multiclass SVM  15
  Performance Evaluation  16
  Confusion Matrix  17
  Feature Generation  18
  Sensing and Data Collection  19
  Feature Selection and Extraction  21
  Experimental Data Collection  30
  Data Recording  30
  Smartphone Selection  30
  Up-to-date Smartphones  32
  App Development  34
Conclusion  35
  Future Work  36
References  38
v

List of Figures and Tables

Figure 1: Examples of ambient and wearable devices  6
Figure 2: Accelerometer (MEMS) Sensor  7
Figure 3: Accelerometer Data Plot  8
Figure 4: Gyroscope  9
Figure 5: Scatter Plot (Gyroscope)  10
Figure 6: Commercial Smartphone (SGSII) and some of its features  11
Figure 7: Example of a binary classification problem  15
Figure 8: Example of a multiclass problem in a two-dimensional space  17
Figure 9: Confusion Matrix for SVM  18
Figure 10: Activity Recognition process pipeline  20
Figure 11: Importing Data  21
Figure 12: Raw Data  21
Figure 13: The Human Activity Recognition Process Pipeline  23
Figure 14: ROC Curve (Walking)  23
Figure 15: ROC Curve (Standing)  24
Figure 16: ROC Curve (Climbing Stairs)  24
Figure 17: ROC Curve (Sitting)  25
Figure 18: ROC Curve (Laying)  25
Figure 19: Prediction Activity (Standing)  26
Figure 20: Prediction Activity (Climbing Stairs)  27
Figure 21: Prediction Activity (Walking)  28
Figure 22: Prediction Activity (Laying)  29
Figure 23: Prediction Activity (Sitting)  30
Table 1: Smartphone sensors with potential use in motion detection  17
Table 2: Features Generation  19
Table 3: Smartphone and 9X2 sensor specifications  33
Introduction

Overview

The demand for understanding human activities has grown in the health-care domain, especially in elder care support, rehabilitation assistance, diabetes management, and cognitive disorders. A huge amount of resources can be saved if sensors can help caretakers record and monitor patients continuously and report automatically when any abnormal behavior is detected. Other applications lie in security and surveillance: a camera can be taught to define a restricted area and mark the objects under focus, whether people or unattended items such as bags. If an unknown bag appears and remains in position for some period, the system can alert the police or security forces.

Many studies have successfully identified activities using wearable sensors with very low error rates, but the majority of previous work was done in laboratories with very constrained settings. Readings from multiple body-attached sensors achieve low error rates, but such complicated setups are not feasible in practice. This project uses low-cost and commercially available smartphones as sensors to identify human activities.

The growing popularity and computational power of the smartphone make it an ideal candidate for a non-intrusive body-attached sensor. According to statistics on US mobile subscribers, around 44% of mobile subscribers in 2011 owned smartphones and 96% of these smartphones had built-in inertial sensors such as an accelerometer or gyroscope. Research has shown that the gyroscope can help activity recognition, even though its contribution alone is not as good as that of the accelerometer. Because the gyroscope is not as easily accessed in cell phones as the accelerometer, our system uses only readings from a 3-dimensional accelerometer.

Unlike many earlier works, we relaxed the constraint of attaching the sensor to a fixed body position with a fixed device orientation. In our design, the phone can be placed at any position around the waist, such as a jacket pocket or pants pocket, with arbitrary orientation. These are the most common positions where people carry mobile phones.

A training process is always required when a new activity is added to the system. Parameters of the same algorithm may need to be trained and adjusted when the algorithm runs on different devices due to the variance of sensors. However, labeling time-series data is a time-consuming process and it is not always possible to ask users to label all the training data. As a result, we propose using active learning techniques to accelerate the training process. Given a classifier, active learning intelligently queries the unlabeled samples and learns the parameters from the correct labels answered by the oracle, usually a human. In this fashion, users label only the samples that the algorithm asks for, and the total amount of required training samples is reduced. To the best of our knowledge, there is no previous study on applying active learning to the human activity recognition problem.

The goal of this project is to design a lightweight and accurate system on a smartphone that can recognize human activities. Moreover, to reduce the labeling time and burden, active learning models are developed. Through testing and comparing different learning algorithms, we find the one that best fits our system in terms of efficiency and accuracy on a smartphone.

Human Activity Recognition (HAR) is a multidisciplinary research field that aims to gather data regarding people's behavior and their interaction with the environment in order to deliver valuable context-aware information. It has contributed to the development of human-centered areas of study such as Ambient Intelligence and Ambient Assisted Living, which concentrate on the improvement of people's quality of life.
Motivation

Understanding people's actions and their interaction with the environment is a key element for the development of the aforementioned intelligent systems. Human Activity Recognition is a field that specifically deals with this issue through the integration of sensing and reasoning, in order to deliver context-aware data that can be employed to provide personalized support in many applications.

As a simple example, imagine a smart home equipped with ambient sensors able to detect people's presence and the activation of household appliances. It is possible to infer the activities performed by its residents based on the sensor signals along with other relevant aspects such as time of day and date (e.g. a person in the kitchen during the morning while a coffee machine is on suggests that person is making breakfast). Consequently, the collected HAR information can be exploited to anticipate future requirements and respond to them (e.g. by automatically pre-heating the coffee machine, controlling room lighting and temperature, etc.).

In the HAR framework, there are still several issues that need to be addressed, some of which are: obtrusiveness of current wearable sensors; lack of fully pervasive systems able to reach users at any location at any time; privacy concerns regarding invasive and continuous monitoring of activities (e.g. by using video cameras); difficulty of performing HAR in real time; battery limitations of wearable devices; and dealing with content extraction from sparse multi-sensor data.
Literature Review

Human activity recognition has been studied for years and researchers have proposed different solutions to attack the problem. Existing approaches typically use vision sensors, inertial sensors, or a mixture of both. Machine learning and threshold-based algorithms are often applied. Machine learning usually produces more accurate and reliable results, while threshold-based algorithms are faster and simpler. One or multiple cameras have been used to capture and identify body posture. Multiple accelerometers and gyroscopes attached to different body positions are the most common solution. Approaches that combine both vision and inertial sensors have also been proposed.

Another essential part of all these algorithms is data processing. The quality of the input features has a great impact on performance. Some previous works focus on generating the most useful features from the time-series data set. The common approach is to analyze the signal in both the time and frequency domains.

Active learning techniques have been applied to many machine learning problems in which labeling samples is time-consuming and labor-expensive. Some applications include speech recognition, information extraction, and handwritten character recognition. This technique, however, has not been applied to the human activity recognition problem before.

(Image: face recognition points used by the FBI in surveillance)
Background

Sensing Activity

The right choice of sensors is one of the first elements to be taken into consideration in the design of HAR systems. A number of sensors have already been explored to extract activity-related information. They measure several attributes including vital signs (e.g. heart rate, body temperature, and blood pressure), environmental signals (e.g. light intensity, temperature and sound levels), motion (e.g. acceleration, speed) and positioning (e.g. global and indoor location).

Figure 1: Examples of ambient and wearable devices

Based on the sensor placement with respect to the user, sensing mechanisms are classified as ambient, when sensors are in static locations of the environment, and wearable, when they are worn or attached to the user's body.
Accelerometer

The accelerometer is an instrument that measures the physical acceleration experienced by an object. It has been employed for several applications in science, medicine, engineering and industry, such as measuring vibrations in machinery, acceleration in high-speed vehicles, and moving loads on bridges.

Figure 2: Accelerometer (MEMS) Sensor

Its principle of operation generally consists of a seismic mass which is displaced in relation to the acceleration it is subjected to. The displacement can then be transduced into a measurable electrical signal. This phenomenon has been applied in the development of Microelectromechanical Systems (MEMS) sensors, whose technology allows creating nano-scale devices fabricated with semiconductors. They are advantageous against other sensor technologies because it is possible to produce them in large scale and with low manufacturing costs. Most common MEMS accelerometers work as a capacitive sensor composed of a cantilever beam with a proof mass whose deflection is correlated with the acceleration experienced by the sensor.

Figure 3: Accelerometer Data Plot
Gyroscope

A gyroscope is a sensing device for measuring orientation. It has been used in many applications such as inertial navigation systems and aerial vehicles for stability augmentation (e.g. in quadcopters), and more recently it has been introduced in electronic devices (e.g. Smartphones, game consoles) for enhancing user interfaces and the gaming experience. For HAR, this sensor has been employed in various applications such as the detection of activities (e.g. walking, stair climbing) and of transitions between postures (e.g. from standing to sitting).

Figure 4: Gyroscope

Gyroscopes have also been produced with MEMS technologies. However, sensors of this type can only measure orientation indirectly: they estimate angular speed, which can then be integrated over time to obtain orientation, as sketched below. This requires a reference initial angular position. These sensors are also highly prone to noise, which causes the measurement to drift as it is integrated.

Figure 5: Scatter Plot (Gyroscope)
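As a rough illustration of this integration step, the following minimal MATLAB sketch integrates one gyroscope axis over time; the 50 Hz sampling rate, the placeholder signal and the zero initial angle are assumptions for illustration, not values taken from this project.

```matlab
% Minimal sketch: integrate angular velocity (one gyroscope axis) over time
% to estimate orientation. Sensor bias and noise accumulate in the integral,
% which is the drift mentioned above.
fs    = 50;                              % assumed sampling rate [Hz]
gyroZ = deg2rad(randn(500, 1));          % placeholder angular velocity [rad/s]
t     = (0:numel(gyroZ) - 1)' / fs;      % time stamp of each sample
angle = cumtrapz(t, gyroZ);              % integrated angle, assuming angle(1) = 0
plot(t, rad2deg(angle));                 % visualize how the estimate drifts
xlabel('time [s]'); ylabel('estimated angle [deg]');
```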
Smartphone as Wearable Sensors

Wearable technology comprises all devices that are body-worn and allow gathering and processing information from users and their interaction with the environment. In this project, we use the Smartphone as a wearable device given that it is now provided with numerous internal sensors, some of which can be exploited for motion sensing and are thus appropriate for the identification of human activities.

Figure 6: Commercial Smartphone (SGSII) and some of its features

We selected two of them: the accelerometer and the gyroscope. Used as wearable sensors, they provide information about the user's linear acceleration and angular velocity respectively, and they are not highly affected by external factors such as bad indoor signal reception in GPS receivers or electromagnetic noise in compasses. However, accelerometer measurements are always influenced by the gravity component when detecting body motion acceleration (a simple way to separate the two components is sketched after Table 1). Similarly, we work with the acceleration and angular velocity signals read directly from these sensors and avoid integrating them to obtain position or orientation information, given the drift due to noise usually found in this type of inertial sensor.

- Accelerometer: measures linear acceleration. The gravity component is always present in the measurements; 3D position estimation requires double integration and is therefore susceptible to a large position drift.
- Gyroscope: measures angular velocity. Orientation is estimated through integration of the angular velocity and is prone to drift due to signal noise.
- Camera: captures images and video. Data extraction from images is computationally expensive and drains the battery; it also raises concerns regarding the user's privacy.
- Compass: measures the magnetic field. Gives orientation with respect to magnetic north but is susceptible to electromagnetic noise.
- GPS: measures geo-location. Directly provides global 3D positioning but does not work indoors due to low signal strength from satellites.
- Barometer: measures atmospheric pressure. Can deliver altitude coordinates and helps to rapidly acquire a GPS location; its precision is low and it can be affected by atmospheric factors (e.g. air currents and changing weather).

Table 1: Smartphone sensors with potential use in motion detection
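One common way to handle the gravity component mentioned above is to split the raw signal with a low-pass filter, keeping the slowly varying part as gravity and the remainder as body motion. The MATLAB sketch below is only illustrative; the 0.3 Hz cutoff, the sampling rate, the placeholder signal and the variable names are assumptions rather than parameters taken from this project.

```matlab
% Minimal sketch: separate gravity from body acceleration with a low-pass
% Butterworth filter (Signal Processing Toolbox). accX is one axis of the
% raw accelerometer signal sampled at fs Hz.
fs       = 50;                                 % assumed sampling rate [Hz]
accX     = 9.81 + 0.5 * randn(1000, 1);        % placeholder raw signal [m/s^2]
fc       = 0.3;                                % assumed cutoff frequency [Hz]
[b, a]   = butter(3, fc / (fs / 2), 'low');    % 3rd-order low-pass filter
gravityX = filtfilt(b, a, accX);               % slowly varying gravity estimate
bodyX    = accX - gravityX;                    % remaining body-motion component
```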
Machine Learning

Machine learning is the area of study concerned with the design, development and evaluation of systems capable of learning from data. In many common situations where we need, for instance, to complete a particular task, or perhaps to make some prediction regarding a given issue, it is possible to find solutions by the inspection and analysis of previous observations with similar characteristics to the addressed problem. In other words, machine learning systems are capable of predicting future actions based on past experiences. Machine learning algorithms have been categorized according to the type of input used for training and its expected outcome.

Supervised Learning

In supervised learning the algorithm generates a function that maps inputs to desired outputs. One standard formulation of the supervised learning task is the classification problem: the learner is required to learn (to approximate the behavior of) a function which maps a vector into one of several classes by looking at several input-output examples of the function.

Unsupervised Learning

Unsupervised learning models a set of inputs for which labeled examples are not available.

Semi-supervised Learning

Semi-supervised learning combines both labeled and unlabeled examples to generate an appropriate function or classifier.

Reinforcement Learning

In reinforcement learning the algorithm learns a policy of how to act given an observation of the world. Every action has some impact on the environment; the environment provides feedback that guides the learning algorithm.
SVM (Support Vector Machine)

The Support Vector Machine is one of the most commonly used supervised ML algorithms. It was initially proposed by Vladimir Vapnik and his colleagues with the aim of solving linear and non-linear binary classification problems. Afterwards, this algorithm was adapted for application in multiclass classification and regression analysis. The SVM for classification is a deterministic approach that aims to find the hyperplanes that best separate the data into classes. These subspaces are the ones that provide the largest margin of separation between the classes of the training data, with the intention of providing a model with low generalization error for its use with unseen data samples.

SVMs are the basis for the classification of activities in this work. For this reason, we now introduce them, starting from the binary SVM model, which is its simplest representation, and moving to the extended case that allows the classification of more than two classes: the multiclass SVM. This algorithm is further revised throughout the development of this research to tackle specific requirements of our application in aspects such as kernel type, arithmetic used and algorithm output type.

Figure 7: Example of a binary classification problem in a two-dimensional space. The line represents a possible solution to the problem and the dots with red outline are the misclassified dataset samples.
Extension to the Multiclass SVM

It is possible to generalize binary machine learning models to solve problems with more than two classes. This process is known as multiclass or multinomial classification. The figure below shows a simple example of a set of elements from 3 different classes in a two-dimensional space, each class represented with a different color (red, green and blue), along with separating hyperplanes chosen as a possible solution to the problem.

Several methods have been proposed for solving multiclass problems from binary formulations, but the two most commonly used are one-vs-all (OVA) and one-vs-one (OVO). Their difference lies in the way they compare each class of interest against the remaining ones: all together in the first case, and one by one in the latter. In this work we use OVA and take advantage of the fact that its output directly represents how likely each class is to match a new test sample against the rest.

The OVA approach consists of constructing a set of m binary SVMs, one associated with each existing class c. They are built from positive training samples coming from the class of interest (labeled as +1) and negative samples which contain the remaining samples (labeled as -1). Once the SVMs are learned, it is possible to compare them to determine which class is the most likely to represent a test sample. The output of the FFP for every class, f_c(x), is either positive or negative, and its sign represents whether the new sample is classified as the given class or not. Ideally, for a given sample in a multiclass problem, only one of the binary classifiers should be positive. Therefore, the classification of a new sample can be formulated as a winner-take-all arbiter which selects the label c* corresponding to the class with the maximum SVM output:

    c* = arg max_c f_c(x)

Figure 8: Example of a multiclass problem in a two-dimensional space. Three classes represented by the red, green and blue dots are separated by hyperplanes which provide a possible linear solution to the classification problem.
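A minimal MATLAB sketch of this one-vs-all construction and of the winner-take-all rule is given below. The placeholder data, variable names and the linear kernel are illustrative assumptions; the actual models in this project are trained on the 31 extracted features.

```matlab
% Minimal sketch: one-vs-all linear SVMs with a winner-take-all decision.
% X is an (examples x features) matrix, y a vector of class labels 1..m.
X = randn(120, 31);  y = randi(3, 120, 1);        % placeholder data, m = 3 classes
classes = unique(y);
svms = cell(numel(classes), 1);
for c = 1:numel(classes)
    % class c against the rest: +1 for class c, -1 otherwise
    svms{c} = fitcsvm(X, 2 * (y == classes(c)) - 1, 'KernelFunction', 'linear');
end

xNew   = randn(1, 31);                            % a new test example
scores = zeros(numel(classes), 1);
for c = 1:numel(classes)
    [~, s]    = predict(svms{c}, xNew);           % s(2) is the score for label +1
    scores(c) = s(2);
end
[~, cStar]     = max(scores);                     % winner-take-all: c* = argmax_c f_c(x)
predictedClass = classes(cStar);
```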
Performance Evaluation

The evaluation of machine learning algorithms is predominantly made through the statistical analysis of the models using the available experimental data. The most common method is the confusion matrix, which allows representing the algorithm performance by clearly identifying the types of errors (false positives and negatives) and the correctly predicted samples over the test data. From it, various metrics can be extracted such as model accuracy, sensitivity, specificity, precision and F1-score. In addition, other comparative qualitative indicators, such as the number of available activities, prediction speed and memory consumption, can support the selection of human activity recognition algorithms.
Confusion Matrix

A common method to visualize the performance of a machine learning algorithm is the confusion matrix C, also called a contingency table. Assuming there are m classes available, a typical confusion matrix is a square matrix of size m x m where misclassifications are visible outside the diagonal. For a pair of classes a and b:

- True Positives (TP): actual samples of class a correctly predicted as class a
- True Negatives (TN): actual samples of class b correctly predicted as class b
- False Positives (FP): actual samples of class b incorrectly predicted as class a
- False Negatives (FN): actual samples of class a incorrectly predicted as class b

Figure 9: Confusion Matrix for SVM
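A minimal MATLAB sketch of deriving the confusion matrix and the per-class metrics mentioned above from true and predicted labels follows; the placeholder labels and variable names are assumptions, and the formulas are the standard per-class definitions.

```matlab
% Minimal sketch: confusion matrix and per-class metrics from true labels
% yTrue and predicted labels yPred (both vectors of class indices 1..m).
yTrue = randi(3, 200, 1);                 % placeholder true labels
yPred = yTrue;  yPred(1:20) = randi(3, 20, 1);   % placeholder predictions with errors
C  = confusionmat(yTrue, yPred);          % rows: true class, columns: predicted class
TP = diag(C);                             % correctly predicted samples per class
FP = sum(C, 1)' - TP;                     % predicted as the class but actually another
FN = sum(C, 2)  - TP;                     % actual class samples predicted as another
accuracy  = sum(TP) / sum(C(:));          % overall classification rate
precision = TP ./ (TP + FP);              % per-class precision
recall    = TP ./ (TP + FN);              % per-class sensitivity (recall)
f1        = 2 * (precision .* recall) ./ (precision + recall);   % per-class F1-score
```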
Feature Generation

To collect the acceleration data, each subject carries a Smartphone for a few hours and performs some activities. In this project, five kinds of common activities are studied: walking, limping, jogging, walking upstairs, and walking downstairs. The position of the phone can be anywhere close to the waist and the orientation is arbitrary. A low-pass filter with a 25 Hz cutoff frequency is applied to suppress noise. Also, because the phone sensor is unstable and may drop samples accidentally, interpolation is applied to fill the gaps.

- Time domain: variance, mean, median, 25% percentile, 75% percentile, correlation between axes, average resultant acceleration
- Frequency domain: energy, entropy, centroid frequency, peak frequency

Table 2: Features Generation

To analyze the activities over a short period, we group every 256 samples into a window, which corresponds to 5.12 s of data. The choice of 256, a power of two, is a preferred size when applying the Fast Fourier Transform. For each sample window, 31 features are extracted in both the time domain and the frequency domain, as shown in Table 2. Except for the average resultant acceleration, all the other features are generated for the x, y and z directions.
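A minimal MATLAB sketch of this windowing step and of a few of the listed features is shown below. The 50 Hz rate implied by 256 samples per 5.12 s window is used; the placeholder signal and variable names are assumptions, and the full 31-feature set is not reproduced.

```matlab
% Minimal sketch: split one accelerometer axis into 256-sample windows and
% compute a few of the time- and frequency-domain features listed in Table 2.
fs   = 50;                                      % 256 samples over 5.12 s => 50 Hz
ax   = randn(5120, 1);                          % placeholder signal for one axis
nWin = floor(numel(ax) / 256);
features = zeros(nWin, 6);
for w = 1:nWin
    seg = ax((w - 1) * 256 + (1:256));
    % time-domain features
    features(w, 1) = mean(seg);
    features(w, 2) = var(seg);
    features(w, 3) = prctile(seg, 25);
    features(w, 4) = prctile(seg, 75);
    % frequency-domain features from a 256-point FFT
    P = abs(fft(seg)) .^ 2;                     % power spectrum
    P = P(2:128);                               % keep positive frequencies, drop DC
    f = (1:127)' * fs / 256;                    % frequencies of the retained bins
    features(w, 5) = sum(P);                    % energy
    features(w, 6) = sum(f .* P) / sum(P);      % spectral centroid frequency
end
```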
Sensing and Data Collection

The definition of the experimental setup for data acquisition is also an important aspect of HAR, depending on whether the subject is observed in its habitat with or without any manipulation by the observer. Naturalistic environments are ideal for experimentation, but in many cases it is not feasible to exploit them. Therefore, controlled experiments can be carried out in laboratory conditions aiming to simulate natural settings (semi-naturalistic environments). Otherwise, fully controlled environments are the last resort for data acquisition, although the performance of the developed method or system with this approach remains uncertain until verified in real situations. Failures in the design of HAR systems can be due to the lack of real-life considerations such as unaccounted activities or target users, noise, sensor calibration and positioning, etc.

Figure 10: Activity Recognition process pipeline

Sensor positioning, for instance, is highly linked to system performance, and previous studies have evaluated different sensor locations to determine the ideal positions for performing HAR with wearable accelerometers. Another consideration about the experimentation process is the number of individuals selected, as a larger number of people covering various age groups and physical conditions is generally preferred. This is also directly related to the performance and generalization capability of the system in the presence of new users.

Figure 11: Importing Data

Figure 12: Raw Data
Feature Selection and Extraction

In a machine learning problem, feature selection refers to the process of selecting a significant set of features that largely determines the discrimination ability of a learning algorithm. Feature extraction, on the other hand, is an approach to diminish the dimensionality of an available set of features by performing inter-feature transformations in order to obtain a new, dimensionally reduced representation without largely sacrificing relevant information from the original set. The curse of dimensionality, which describes the difficulty of understanding and dealing with high-dimensional data, is closely linked with these two reduction mechanisms, as they can alleviate the problems that may arise when working in high-dimensional spaces.

Feature selection and extraction also allow reducing training times and increasing the generalization performance in ML problems. They differ, however, in that the interpretability of models in which feature selection is employed is much clearer: features remain distinct from each other and are not merged as in feature-extraction-based approaches.

Depending on the application, the features required for the extraction of relevant information may vary. In the particular case of HAR, a reduced representation of the sensor data can be used as the input of a recognition algorithm. This is attained by estimating various measures from the sensor signals in different domains (e.g. in time and frequency). Nonetheless, other time-frequency representations such as wavelet transforms are also applicable. Once obtained, the features can be further reduced using feature selection (e.g. exhaustive search, or wrapper, filter and embedded methods), extraction approaches, or a combination of both.
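As one example of the wrapper-style selection mentioned above, MATLAB's sequentialfs can run a sequential forward search (SFS). The sketch below is only illustrative: the classifier, the fold count, the placeholder data and the variable names are assumptions, not the exact configuration used in this project.

```matlab
% Minimal sketch: sequential forward selection (a wrapper method) using the
% cross-validated misclassification count as the selection criterion.
X = randn(200, 31);  y = randi(5, 200, 1);                  % placeholder features and labels
critfun  = @(Xtr, ytr, Xte, yte) ...
           sum(yte ~= predict(fitcecoc(Xtr, ytr), Xte));    % errors on the held-out fold
selected = sequentialfs(critfun, X, y, 'cv', 5);            % logical mask of kept features
Xreduced = X(:, selected);                                  % dimensionally reduced data
```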
Figure 13: The Human Activity Recognition Process Pipeline with its four main blocks

Figure 14: ROC Curve (Walking)

Figure 15: ROC Curve (Standing)

Figure 16: ROC Curve (Climbing Stairs)

Figure 17: ROC Curve (Sitting)

Figure 18: ROC Curve (Laying)

Figure 19: Prediction Activity (Standing)

Figure 20: Prediction Activity (Climbing Stairs)

Figure 21: Prediction Activity (Walking)

Figure 22: Prediction Activity (Laying)

Figure 23: Prediction Activity (Sitting)
Experimental Data Collection

Data from the accelerometer has the following attributes: time, acceleration along the x axis, acceleration along the y axis, and acceleration along the z axis. We used the Support Vector Machine algorithm to analyze the raw data obtained from the mobile device, sequentially comparing the incoming activity pattern with the training sets and calculating the distances between the incoming acceleration data points and the points of our training sets. Standard classification algorithms cannot be applied directly to raw time-series accelerometer data; instead, the raw time series must first be transformed into examples.
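A minimal MATLAB sketch of the distance-based comparison described above is given below: an incoming example is matched against the labelled training examples and assigned the label of the closest one (1-nearest-neighbour matching). The feature vectors are assumed to be the windowed examples described earlier, and the placeholder data and variable names are illustrative.

```matlab
% Minimal sketch: assign an incoming example the label of the closest
% training example (Euclidean distance), i.e. 1-nearest-neighbour matching.
Xtrain = randn(300, 31);  ytrain = randi(5, 300, 1);   % placeholder training examples
xNew   = randn(1, 31);                                 % one incoming windowed example
d         = pdist2(Xtrain, xNew);                      % distance to every training example
[~, idx]  = min(d);                                    % closest training pattern
predicted = ytrain(idx);                               % its activity label
```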
Data Recording

MotoLog was the first Android app we developed for capturing and storing data from the Smartphone inertial sensors, and it was created for carrying out the HAR experiments. It also performs real-time visualization of the inertial signals (accelerometer, gyroscope and magnetometer) on the Smartphone screen, and allows visualizing the experiment recordings online.

Smartphone Selection

Finding a suitable Smartphone to perform HAR involves the evaluation of up-to-date devices in the market, in which hardware, sensors and software are the primary elements to be considered. Other aspects, such as price, the market of available brands and the potential future distribution of applications, were also taken into account. However, by the time this selection process was done (2011), Smartphones with the required embedded sensors were very limited, as were their specifications regarding speed and direct access to the sensors through their OS. This motivated the assessment of only the high-end devices available at that time. With the accelerated growth of current technologies, it was just a matter of two years before almost every mid- and high-end Smartphone in the market offered the required sensors and hardware with even higher specifications. This finding expands the range of applications of the approaches developed in this work to a greater group of devices.
Up-to-date Smartphones

For the selection of the Smartphone regarding sensor type, we took as a reference point an already available inertial sensor, the 9X2, which had already demonstrated its applicability in the detection of various human motor disorders in people with disabilities. It is internally composed of three triaxial sensors: an accelerometer, a gyroscope and a magnetometer. It collects inertial sensor data that can be stored on a microSD memory card or wirelessly transmitted to a terminal via Bluetooth. We expected to find Smartphones with at least those minimum characteristics.

9X2 (CETpD): Microchip dsPIC33 CPU at 80 MHz; 2 GB ROM; 16 KB RAM; no OS; STMicroelectronics LIS3LV02DQ accelerometer (max 640 Hz, ±2/±6 g); InvenSense IDG650/ISZ650 gyroscope (max 140 Hz, ±440/±2000 °/s); no display; external microSD card; Bluetooth; Li-Ion 1000 mAh battery; stand-by up to 18 h.

iPhone 4 (Apple): ARM Cortex-A8 CPU at 1 GHz; 16 GB/32 GB ROM; 512 MB RAM; iOS 4; STMicroelectronics LIS331DLH accelerometer (max 1 kHz, ±2/±4/±8 g); STMicroelectronics L3G4200D gyroscope (max 800 Hz, ±250/±500/±2000 °/s); LED-backlit IPS display; no external card; 3G, WLAN, Bluetooth; Li-Po 1420 mAh battery; stand-by up to 300 h.

I9100 Galaxy S II (Samsung): ARM Cortex-A9 dual-core CPU at 1.2 GHz; 16 GB/32 GB ROM; 1 GB RAM; Android OS v2.3; Bosch Sensortec SMB380 accelerometer (max 1.5 kHz, ±2/±4/±8 g); STMicroelectronics L3G4200D gyroscope (max 800 Hz, ±250/±500/±2000 °/s); TFT Super AMOLED Plus display; external card supported; 3G, WLAN, Bluetooth; Li-Ion 1650 mAh battery; stand-by up to 610 h.

Table 3: Smartphone and 9X2 sensor specifications

Table 3 presents a comparison of the 9X2 sensor with two high-end Smartphone candidates, including their characteristics regarding hardware and software. The embedded accelerometer and gyroscope specifications of the Smartphones outperform those of the reference inertial sensor regarding speed and operation ranges, making them potentially suitable for application in HAR. However, even though the specified frequencies are provided by the devices' datasheets, the net frequency of these inertial sensors is limited by the Smartphone OS, which regulates their operation speed and battery consumption. It is around 100 Hz and can vary depending on the OS load, but it is sufficient to properly capture human body motion. Regarding memory and Central Processing Unit (CPU) characteristics, both Smartphones are similar. As a result, the device selection was subject to additional factors.

The Smartphone selected for carrying out the HAR experiments was the SGSII. It is managed by the Android OS, an open-source platform with a publicly distributed development environment which includes a large set of APIs allowing access to the Smartphone hardware and internal sensors, robust ML tools, and publication policies for apps that are relatively simple to fulfill. Moreover, developing on this OS extends the utilization of HAR apps to a wider range of brands and devices (e.g. tablets, notebooks) which also run Android OS, unlike iOS, which operates on devices of only one brand. Figure 6 depicts an image of the selected device. Android OS also has a leading position in the market, which is advantageous because it contributes to an easier distribution of final applications to a larger population sector. Smartphone market shares in 2013 [Gupta et al., 2013], for example, showed that the three top mobile OSs were Android OS, iOS and Windows Phone; together they make up the vast majority of the phone market, reaching 96%, and Android OS controls a remarkable 79% of the market share on its own.
App Development

All the Smartphone apps of this work were built using a software solution for Android development, the Android Development Tools (ADT) Bundle, which integrates a collection of programs [Android, 2016]:

- Eclipse: an integrated environment for the development of software projects with multi-language support.
- ADT plug-in: the toolset for Eclipse designed to allow the development of Android apps.
- Android Software Development Kit (SDK): provides the API libraries and developer tools required to build apps for Android.
- Android Native Development Kit (NDK): the collection of tools that allows implementing apps using native-code languages such as C and C++.

The code was written using two languages, namely Java and C. The former was employed for the development of the graphical user interface and the app's basic controls. C was reserved for the computationally expensive tasks such as accessing the Smartphone sensors, signal processing, running ML algorithms and storing data.
Conclusions

Human activity recognition has broad applications in medical research and human survey systems. In this project, we designed a smartphone-based recognition system that recognizes five human activities: walking, limping, jogging, going upstairs and going downstairs. The system collected time-series signals using a built-in accelerometer, generated 31 features in both the time and frequency domains, and then reduced the feature dimensionality to improve performance. The activity data were trained and tested using 4 passive learning methods: quadratic classifier, k-nearest neighbor algorithm, support vector machine, and artificial neural networks. The best classification rate in our experiment was 84.4%, achieved by SVM with features selected by SFS. Classification performance is robust to the orientation and the position of the smartphone.

In addition, active learning algorithms were studied to reduce the expense of labeling data. Experiment results demonstrated the effectiveness of active learning in saving labeling labor while achieving performance comparable with passive learning. Among the four classifiers, KNN and SVM improve the most after applying active learning. The results demonstrate that entropy and distance to the boundary are robust uncertainty measures when performing queries on KNN and SVM respectively. Conclusively, SVM is the optimal choice for our problem.

Future work may consider more activities and implement a real-time system on the smartphone. Other query strategies such as variance reduction and density-weighted methods may be investigated to enhance the performance of the active learning schemes proposed here.
Future Work

There is still room for improvement in our work, which can be addressed from two different perspectives: i) by solving current limitations of our proposed systems, and ii) by extending our achievements through complementary and novel applications. In the first case, some issues have arisen, such as the limited number of activities the system can deal with, the fixed Smartphone position on the waist, and the need for novel approaches to incorporate users with distinct differences in their motion patterns (e.g. people with walking difficulties) into the system. In the second case, new ideas about how to exploit HAR information in order to provide new services can be explored. These include the development of context-aware apps for health and sports monitoring, elderly care, and understanding the interaction between users of similar systems. In this section we focus on some of these aspects and propose them as future research directions:

- For a new user, the performance of the proposed HAR system can be improved if his motion data is integrated into the learned model. Although this can be done, for instance, through retraining after following a controlled sequence of activities, this process can be tedious for the user. Considering that during a normal day it is possible to gather large amounts of data, we can explore semi-supervised learning strategies which allow combining this unlabeled data with the already existing labeled training data in order to produce considerable improvements in the system's learning accuracy. This can bring advantages to new users, especially those with particular conditions such as very slow motion or physical disabilities, which are normally difficult to incorporate in the training and for which the generalization capability of the ML algorithm is not sufficient.

- The proposed HAR systems are specific to waist-mounted smartphones. Although this position allows some degree of variability, it is important to explore whether it is also possible to place the device in different body parts such as shirt or pants pockets, or even worn around the arms (e.g. with armbands for running). In the first case some problems may arise due to the free and continuous motion of the device with respect to the body, which can be hard to control (e.g. for distinguishing between standing and sitting activities). Moreover, now that wearable devices such as smart watches with embedded inertial sensors are becoming increasingly popular, it is interesting to investigate how to combine them with our current Smartphone-based system in order to improve the recognition or to add new activities that involve the upper limbs, such as typing, brushing teeth and writing.

- The outcome of the presented HAR system can be used in higher-level context-aware applications, for example when combined with location-based services such as GPS or home presence sensors to achieve indoor and outdoor activity detection. Also, if activity information is merged with vital sign sensors, it is possible to develop apps for medical diagnosis and for monitoring ill patients during their daily life without third-party supervision. Likewise, it is possible to merge activity information with the Smartphone connectivity status, such as incoming calls and messages, in order to control the Smartphone behavior while specific activities are occurring, e.g. to avoid receiving phone calls while users are running or sleeping and to automatically produce unavailability responses; later, when the user's activity has changed, the information can be provided in a timely manner. On the other hand, we also propose the study of the interaction between two or more users wearing the HAR recognition system in order to infer collective behavior such as chatting, dancing, playing sports, etc.

In conclusion, there is a wide range of new possibilities and services that need to be explored and to which HAR can contribute.
References

[1] Android, 2016. Android Developers. http://developer.android.com/index.html. Accessed 03/11/2016.
[2] Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra, and Jorge-Luis Reyes-Ortiz. Human activity recognition on smartphones for mobile context awareness. In Neural Information Processing Systems, Workshop on Machine Learning Approaches to Mobile Context Awareness, 2012.
[3] MATLAB Documentation, MathWorks. www.mathworks.com/help/matlab/
[4] MathWorks Support. www.mathworks.com/support/
[5] MATLAB Central, MathWorks. www.mathworks.com/matlabcentral/
[6] Dataset used: https://archive.ics.uci.edu/ml/machine-learning-databases/00240/UCI%20HAR%20Dataset.zip
[7] Davide Anguita and Xavier Parra. Machine recognition of human activities, June 2015.