ISSN: 2278 – 1323
International Journal of Advanced Research in Computer Engineering & Technology (IJARCET)
Volume 2, Issue 6, June 2013
www.ijarcet.org

Discovering the Most Influential Human Action Using Web Based Classifier
Soumya R, R.Gnanakumari
Abstract- Vision-based human action recognition is the process of tagging image sequences with action labels. The identification of movement can be performed at various levels of abstraction. In the existing system, after collecting a preliminary image set for each action by querying the Web, a logistic regression classifier is fitted to distinguish the foreground features of the corresponding action from the background. In the action recognition process, PbHOG features can be used, which are more robust to background clutter and to variance of the domain. Using the initial classifier, more action images are collected incrementally while the model is improved at the same time. Nonnegative matrix factorization is applied to this set to find the diverse pose clusters for each action, and separate local action classifiers are trained for each cluster of poses. In the proposed system, event monitoring is used to discover the most influential ordered pose pair for each specific human action. To this end, already annotated motion capture datasets are used, and action segmentation is posed as a weakly supervised temporal clustering problem with an unknown number of clusters. The annotations are used to learn a distance metric for skeleton motion from relative comparisons of the form "samples of the same action are more similar to each other than to samples of a different action." The learned distance metric is then used to cluster the test sequences; to this end, a hierarchical Dirichlet process that also estimates the number of clusters is employed.
Keywords- Action recognition, Hierarchical Dirichlet process
1. INTRODUCTION
Vision-based human action recognition is the process of tagging images with action labels. A robust solution to this problem has applications in domains such as visual surveillance, video retrieval, and human–computer interaction.
Manuscript received June, 2013.
Soumya R, PG Scholar, Computer Science and Engineering, Coimbatore Institute of Engineering and Technology, Narasipuram, Coimbatore, Tamil Nadu, India, 9895426268.
R.Gnanakumari, Assistant Professor, Computer Science and Engineering, Coimbatore Institute of Engineering and Technology, Narasipuram, Coimbatore, Tamil Nadu, India.
The task is difficult due to variations in motion performance.
The task of labeling videos containing human motion with
action classes is motivated by many applications both offline
and online. Automatic annotation [8] of video enables more
efficient searching.
Video annotation is the process of adding interactive commentary to videos, that is, adding background information about the video. Image annotation is the process by which a computer system automatically assigns metadata, in the form of captions or keywords, to a digital image. In machine learning, unsupervised learning [8] refers to the problem of trying to find hidden structure in unlabeled data. Since the examples given to the learner are unlabeled, there is no error or reward signal to evaluate a potential solution. This distinguishes unsupervised learning from supervised learning. Unsupervised learning is closely related to the problem of density estimation in statistics. However, unsupervised learning also encompasses many other techniques that seek to summarize and explain key features of the data. Many methods employed in unsupervised learning are based on data mining methods used to preprocess data. Unsupervised learning studies how systems can learn to represent particular input patterns in a way that reflects the statistical structure of the overall collection of patterns.
In automatic image annotation, queries can be specified more naturally by the user, which is not possible in content-based image retrieval (CBIR). In the case of CBIR, users need to search by image concepts such as color and texture. Traditional methods of image retrieval, such as those used by libraries, have relied on manually annotated images, which is expensive and time-consuming, especially given the large and constantly growing image databases in existence.
Action recognition can be improved by learning pose representations from the Web, but this normally needs a large amount of training video, and it is challenging because it requires large labeled datasets that cover a diverse set of poses. Action recognition in uncontrolled videos is a difficult task, where it is very hard to find the large amount of training video necessary to model all the variations of the domain. This paper addresses the problem by proposing a generic method for action recognition. The idea is to use images collected from the Web to discover representations of actions and to organize this knowledge so that actions in videos can be annotated routinely. For this purpose, an incremental image retrieval procedure is first used to collect and clean up the training set required for constructing the human pose classifiers. The approach is unsupervised because it requires no human intervention other than simply text querying the name of the action to an
internet search engine. Its benefit is two-fold: 1) it improves the retrieval of action images, and 2) it collects a large generic database of action poses, which can then be used in the categorization of videos. How the Web-based pose classifiers can be utilized in conjunction with limited labeled videos is also explored. Ordered pose pairs (OPP) can be used to encode the temporal ordering of poses in the action model, and this temporal ordering of pose pairs can increase action recognition accuracy. By selecting the key poses with the help of Web-based classifiers, the categorization time can be kept low. Our experiments demonstrate that, with or without available video data, the pose models learned from the Web can improve the performance of action recognition systems.
The first contribution is a system which incrementally collects action images and videos from the Web by simple text querying. The second is building action models from this noisy set of images in an unsupervised fashion: a method is presented for cleaning the results of keyword retrieval, and pose models are learned from the cleaned dataset. The third is the PbHOG feature, designed for use in the presence of background clutter: the probability of boundary (Pb) operator (PbCanny) is applied as an edge detector to delineate the object boundaries, and HOG features are then extracted from the Pb responses. The action models can also be used to re-rank the retrieved images and improve the retrieval precision.
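As a concrete illustration of this re-ranking step, the sketch below sorts retrieved images by the score of a fitted foreground classifier (for example, the logistic regression model over PbHOG features); the function and variable names are illustrative, not from the paper.

    import numpy as np

    def rerank_images(features, image_ids, classifier):
        # Score each retrieved image with the probability of being a
        # relevant action image, then sort in descending order.
        scores = classifier.predict_proba(features)[:, 1]
        order = np.argsort(-scores)
        return [image_ids[i] for i in order]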
The action models learned from one set of videos are adapted for recognition in another set of videos using a transfer topic model. The fourth contribution is using the action pose models to annotate human actions in uncontrolled videos (e.g., YouTube videos). The pose models learned from the Web can be used to locate the distinctive poses inside the videos and, further, to improve action recognition; this key pose selection scheme also reduces the training time to a great extent. The fifth is using the image data collected from the Web jointly with video data to improve action recognition. The sixth is the OPP method for temporal reasoning about body poses within each action, together with the use of Web-based pose classifiers for selecting the key poses from human tracks for efficient training. The OPP descriptor goes one step further and models the temporal relationships between poses; in this way, actions that share similar intermediate poses can be discriminated more accurately.
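To make the OPP idea concrete, here is a minimal sketch, assuming each frame of a human track has already been quantized to a pose-cluster index; the descriptor counts how often pose i occurs before pose j in the track. The function name and the normalization are illustrative assumptions.

    import numpy as np

    def opp_descriptor(pose_sequence, n_poses):
        # Count ordered pose pairs: entry (i, j) counts occurrences of
        # pose i at some frame followed by pose j at a later frame.
        desc = np.zeros((n_poses, n_poses))
        for t, p in enumerate(pose_sequence):
            for q in pose_sequence[t + 1:]:
                desc[p, q] += 1
        total = desc.sum()
        return (desc / total).ravel() if total else desc.ravel()

    # Example: a track quantized into pose clusters 0 -> 0 -> 1 -> 2.
    print(opp_descriptor([0, 0, 1, 2], n_poses=3))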
The main contributions are:
• Proposing a system which incrementally collects action images from the Web by simple text querying,
• Building action models from the noisy set of images in an unsupervised fashion, and
• Using the models to annotate human actions in uncontrolled videos, such as YouTube videos.
2. RELATED WORK
Action recognition [3] can be achieved using local
dimensions in terms of spatiotemporal interest points. In
spatial recognition, local features have recently been joint
with SVM in a robust classification approach. In a similar
manner, here, investigate the combination of local space-time
features and SVM and apply the resulting approach to the
recognition of human actions. Typical scenarios include
scenes with cluttered, moving backgrounds, nonstationary
camera, scale variations, individual variations in appearance
and cloth of people, changes in light and view point and so
forth. All of these conditions introduce challenging problems
that have been addressed in computer vision in the past.
Recognizing human action [2] is a key component
in many computer vision applications, such as video
surveillance, human-computer interface, video indexing and
browsing, recognition of gestures, analysis of sports events
and dance choreography. Some of the recent works done in
the area of action recognition have shown that it is useful to
analyze actions by looking at the video sequence as a
space-time intensity volume. Analyzing actions directly in
the space-time volume avoids some limitations of traditional
approaches that involve key frames.
Automatically categorizing or localizing different actions [8] in video sequences is very useful for a variety of tasks, such as video surveillance, object-level video summarization, video indexing, and digital library organization. However, it remains a challenging task for computers to achieve robust action recognition due to cluttered backgrounds, camera motion, occlusion, and geometric and photometric variances of objects. The authors of [8] present an algorithm that aims to account for these challenges. A lot of previous work has been presented to address these questions. One popular approach is to apply tracked motion trajectories of body parts to action recognition; this requires much human supervision, and the robustness of the algorithm is highly dependent on the tracking system. Ke et al. apply spatio-temporal volumetric features that efficiently scan video sequences in space and time. Another approach is to use local space-time patches of videos. Laptev et al. present a space-time interest point detector based on the idea of the Harris and Förstner interest point operators.
3. IMAGE REPRESENTATION
For training classifiers, a large amount of data is needed, and collecting such data manually is very costly. Data collected from the Web is more diverse and less biased than home-made datasets; it may therefore be more suitable for real-world tasks. Collecting useful training images from the Web is difficult due to various challenges. For a given query, the ratio of non-relevant images in the retrieved dataset is high, and the relevant image set comprises irregular subsets. For building a consistent training set, each of the subsets should be recognized and represented in the final set. Action images are images in which there is at least one person engaged in a particular action. For a given query, the number of non-relevant images will be high; sometimes more than 50% of the images can be irrelevant. The results of keyword retrieval must be cleaned, and pose models are then learned from the cleaned dataset. After collecting the relevant images, the first step is to extract the location of the human; if no humans are detected in an image, that image is discarded. A human detector, which is effective in detecting people [10], can be used for this purpose.
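As an illustration of this filtering step, the sketch below uses OpenCV's stock HOG pedestrian detector as a stand-in for the paper's human detector; any detector with a similar interface would serve.

    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def keep_if_person(image_path):
        # Returns detected person boxes, or None when the image should
        # be discarded because no human was found.
        img = cv2.imread(image_path)
        if img is None:
            return None
        rects, _weights = hog.detectMultiScale(img, winStride=(8, 8))
        return rects if len(rects) > 0 else None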
Figure 1. System architecture of the human action recognition system

Figure 1 shows the system architecture of the human action recognition system. First, action images are collected from web pages by simply text querying the name of the action to a web search engine. A person detector is applied to the images, and the videos are converted into frames. The action classifier then classifies the actions based on the poses. After the actions are identified and classified, they can be annotated.
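The video-to-frames step mentioned above is straightforward; a minimal OpenCV sketch follows (the frame-sampling step size is an assumption, not specified in the paper).

    import cv2

    def video_to_frames(video_path, step=1):
        # Decode the video and keep every `step`-th frame.
        cap = cv2.VideoCapture(video_path)
        frames, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:
                frames.append(frame)
            idx += 1
        cap.release()
        return frames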
3.1 Image collection from webpages
Collecting useful training image datasets from the Web can be difficult due to various challenges. First, for a given keyword-based image search, the ratio of non-relevant images in the retrieved dataset tends to be very high. Second, the relevant image set mostly comprises discontinuous subsets, due to different poses, viewpoints, and appearances. In order to build a reliable and effective training set, each of these subsets should be identified and represented in the final collected dataset. The number of objects, as well as the objects' pose and scale, varies considerably across the retrieved images.
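A hedged sketch of the incremental collection loop described above: search_images and extract_features are hypothetical stand-ins for a web image search API and the feature pipeline, negatives is a fixed background set, and the acceptance threshold is illustrative.

    def collect_incrementally(action, search_images, extract_features,
                              classifier, negatives, rounds=3, threshold=0.6):
        # Initial noisy retrieval for the action keyword.
        positives = search_images(action, page=0)
        for r in range(1, rounds + 1):
            # Refit the foreground/background classifier on the current set.
            X = extract_features(positives + negatives)
            y = [1] * len(positives) + [0] * len(negatives)
            classifier.fit(X, y)
            # Retrieve more candidates and keep only the confident ones.
            candidates = search_images(action, page=r)
            probs = classifier.predict_proba(extract_features(candidates))[:, 1]
            positives += [c for c, p in zip(candidates, probs) if p > threshold]
        return positives, classifier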
3.2 Person Detection
The detected humans are not always centered within the bounding box. This issue can be solved via an alignment step based on the head area response. Since there is high variance in the limb areas, head detections are the most reliable output of the detector. The head area should be positioned in the upper center of the bounding box, so for each image the detector's output for the head is taken and the bounding box is updated accordingly.
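A minimal sketch of this alignment, assuming axis-aligned (x, y, w, h) boxes and that "upper center" means the head center should sit at roughly 15% of the box height; both the geometry and the fraction are assumptions.

    def align_box(box, head, head_frac=0.15):
        # Shift the person box so the detected head lands at its upper center.
        bx, by, bw, bh = box
        hx, hy, hw, hh = head
        dx = (hx + hw / 2.0) - (bx + bw / 2.0)
        dy = (hy + hh / 2.0) - (by + bh * head_frac)
        return (bx + dx, by + dy, bw, bh)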
3.3 Feature extraction
Once the humans are centralized within the bounding box, an image descriptor is extracted for each detected area. The descriptor is used to provide a good representation of the poses. The Histogram of Oriented Gradients (HOG) is successful at finding humans in images, but the clutter in web images makes it difficult to obtain a clean pose description: a simple gradient-filtering-based HOG descriptor is affected by noisy responses. The probability of boundary (Pb) operator can therefore be used as an edge detector.
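The sketch below illustrates the PbHOG idea. A true Pb operator is not available in standard libraries, so a Canny edge map stands in for the Pb response here; HOG is then computed over the edge map rather than raw image gradients.

    from skimage.color import rgb2gray
    from skimage.feature import canny, hog

    def pbhog(image_rgb):
        # Approximate the boundary response, then describe it with HOG.
        gray = rgb2gray(image_rgb)
        edges = canny(gray, sigma=2.0).astype(float)  # stand-in for Pb
        return hog(edges, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))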
3.4 Testing Input
The input video is tested with one-vs-all SVM classifiers trained over the OPP descriptors of the training videos. In the SVM classifier, the Hellinger kernel can be used, whose feature map can be computed explicitly by taking the square root of the descriptor values. When video data is available, it is possible to use this video data to improve the action models that are learned from Web image data.
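A minimal sketch of the explicit Hellinger feature map with a linear one-vs-all SVM: the descriptor is L1-normalized and its element-wise square root is taken, after which a linear SVM on the mapped features is equivalent to a Hellinger-kernel SVM. The normalization convention is an assumption.

    import numpy as np
    from sklearn.svm import LinearSVC

    def hellinger_map(X):
        X = np.asarray(X, dtype=float)
        X = X / np.maximum(X.sum(axis=1, keepdims=True), 1e-12)
        return np.sqrt(X)

    clf = LinearSVC()  # one-vs-rest for multiclass by default
    # clf.fit(hellinger_map(train_opp), train_labels)
    # predictions = clf.predict(hellinger_map(test_opp))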
3.5 Testing Feature Extraction
Web-based classifiers are effective at selecting the reliable and informative parts of the sequences so that only those detections are used for action inference. This selection can lessen the testing data size and hence reduce the computation time greatly. For this purpose, the already trained Web-based pose classifiers can be used. The selected poses and the associated local motion information can further be utilized for efficient action classification.
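A sketch of this selection step, assuming one probabilistic classifier per key pose and a hand-chosen confidence threshold; only detections that some pose classifier scores confidently are retained for inference.

    import numpy as np

    def select_key_poses(features, pose_classifiers, min_conf=0.8):
        # Maximum confidence over all pose classifiers, per detection.
        conf = np.max([c.predict_proba(features)[:, 1]
                       for c in pose_classifiers], axis=0)
        return np.where(conf >= min_conf)[0]  # indices of reliable detections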
Figure 2. Output of the person detector
3.6 Action classification using the NMF classifier
Using the training videos, one-vs-all SVM classifiers are learned over the OPP descriptors. In the SVM classifier, the Hellinger kernel can be used, whose feature map can be computed explicitly by taking the square root of the descriptor values. When video data is available, the collected image data is used together with any available action videos to learn better classifiers over the combined data. Another option is to use the classifiers learned from Web image data to select the useful parts of the human tracks in videos, facilitating more effective and efficient recognition.
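The NMF step from the abstract (finding diverse pose clusters in the collected image set) can be sketched as follows; reading each image's pose cluster off its dominant component is one natural interpretation, and the number of components k is chosen by hand here.

    import numpy as np
    from sklearn.decomposition import NMF

    def nmf_pose_clusters(descriptors, k=5):
        # descriptors: (n_images, d) nonnegative matrix (e.g., PbHOG features).
        model = NMF(n_components=k, init='nndsvd', max_iter=500)
        W = model.fit_transform(descriptors)  # image-to-component weights
        return W.argmax(axis=1)               # pose cluster per image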
3.7 Metric Learning from Poses for Temporal Clustering of Human Motions
Using the action labels, constraints can be formulated in terms of similarity and dissimilarity between triplets of feature vectors. Under such constraints, the matrix A can be learned by employing Information-Theoretic Metric
Learning (ITML). ITML finds a suitable matrix A by formulating the problem in terms of how similar A is to a given distance parameterized by A0 (typically, the identity or the sample covariance). Since the distance in question is a Mahalanobis distance, the problem can be treated as measuring the similarity of two Gaussian distributions parameterized by A and A0, respectively. That leads to an information-theoretic objective in terms of the Kullback-Leibler divergence between both Gaussians. This divergence can be expressed as a LogDet divergence, thus yielding the following optimization problem:

    min_{A ⪰ 0, ξ}  D_ld(A, A0) + λ · D_ld(diag(ξ), diag(c))        (1)

subject to the similarity and dissimilarity constraints, relaxed by the slack variables ξ, where D_ld is the LogDet divergence, c is the vector of constraints, ξ is a vector of slack variables (initialized to c and constrained to be component-wise non-negative) that guarantees the existence of a solution, and λ is a parameter controlling the tradeoff between satisfying the constraints and minimizing the similarity between distances.
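To make equation (1) concrete, the sketch below computes its two ingredients, the Mahalanobis distance parameterized by A and the LogDet divergence D_ld(A, A0), directly with numpy; the example matrices are illustrative.

    import numpy as np

    def mahalanobis(x, y, A):
        d = x - y
        return float(np.sqrt(d @ A @ d))

    def logdet_div(A, A0):
        # D_ld(A, A0) = tr(A A0^-1) - log det(A A0^-1) - n
        M = A @ np.linalg.inv(A0)
        sign, logdet = np.linalg.slogdet(M)
        return float(np.trace(M) - logdet - A.shape[0])

    A0 = np.eye(3)                 # typical prior: identity metric
    A = np.diag([2.0, 1.0, 0.5])   # an illustrative learned metric
    print(logdet_div(A, A0))       # zero iff A equals A0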
Figure 3. Annotated video frames
4. PERFORMANCE COMPARISON
To verify the advantages of the proposed work, its performance has to be evaluated. The objective of this section is to compare the multiple action recognition system with the single action recognition system.
The dataset used for the experiment was a synthetic dataset. For multiple action recognition, actions such as sitting, jumping, and walking were collected and annotated.

Table 1. Comparison of accuracy

Method                         Accuracy (%)
Single Action Recognition      85
Multiple Action Recognition    93
5. RESULTS AND DISCUSSIONS
Figure 4 shows the performance of the multiple action recognition system on the accuracy parameter. The accuracy of multiple action recognition is found to be higher than that of the single action recognition system.
Figure 4. Performance Comparison
6. CONCLUSION
In this paper, videos are collected from the Web, and the actions are identified and classified based on pose. The performance evaluation shows that the multiple action recognition system achieves higher accuracy than the single action recognition system.
7. REFERENCES
1. A. López-Méndez, J. Gall, and J. R. Casas, "Metric learning from poses for temporal clustering of human motion."
2. M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri, "Actions as space-time shapes," in Proc. ICCV, 2005, vol. 2, pp. 1395–1402.
3. C. Schuldt, I. Laptev, and B. Caputo, "Recognizing human actions: A local SVM approach," in Proc. ICPR, 2004, pp. 32–36.
4. T.-K. Kim, S.-F. Wong, and R. Cipolla, "Tensor canonical correlation analysis for action classification," presented at the CVPR, Minneapolis, MN, 2007.
5. D. Lee and H. Seung, "Algorithms for non-negative matrix factorization," in Proc. NIPS, 2001, pp. 556–562.
6. D. Tran and A. Sorokin, "Human activity recognition with metric learning," in Proc. ECCV, 2008, pp. 548–561.
7. F. Schroff, A. Criminisi, and A. Zisserman, "Harvesting image databases from the web," presented at the ICCV, Rio de Janeiro, Brazil, 2007.
8. J. C. Niebles, H. Wang, and L. Fei-Fei, "Unsupervised learning of human action categories using spatial-temporal words," in Proc. BMVC, 2006, pp. 1249–1258.
Soumya R is currently pursuing M.E. Computer Science and Engineering at Coimbatore Institute of Engineering and Technology, Coimbatore, Tamil Nadu (Anna University, Chennai). She completed her B.Tech in Information Technology from M.E.S College of Engineering, Kuttipuram, Kerala (Calicut University, Kerala) in 2010.

R.GnanaKumari is currently Assistant Professor in the Department of Computer Science at Coimbatore Institute of Engineering and Technology, Coimbatore, Tamil Nadu (Anna University, Chennai). She completed her B.E in Computer Science and Engineering from Sri Ramakrishna Engineering College, Coimbatore, in 2002 and her M.E in Computer Science and Engineering from Anna University of Technology in 2011. She has about 3 years of experience in industry and 7.6 years of experience in teaching.
Más contenido relacionado

La actualidad más candente

META-HEURISTICS BASED ARF OPTIMIZATION FOR IMAGE RETRIEVAL
META-HEURISTICS BASED ARF OPTIMIZATION FOR IMAGE RETRIEVALMETA-HEURISTICS BASED ARF OPTIMIZATION FOR IMAGE RETRIEVAL
META-HEURISTICS BASED ARF OPTIMIZATION FOR IMAGE RETRIEVALIJCSEIT Journal
 
IRJET-Real-Time Object Detection: A Survey
IRJET-Real-Time Object Detection: A SurveyIRJET-Real-Time Object Detection: A Survey
IRJET-Real-Time Object Detection: A SurveyIRJET Journal
 
IRJET- Video Forgery Detection using Machine Learning
IRJET-  	  Video Forgery Detection using Machine LearningIRJET-  	  Video Forgery Detection using Machine Learning
IRJET- Video Forgery Detection using Machine LearningIRJET Journal
 
Vehicle Monitoring System based On IOT, Using 4G/LTE
Vehicle Monitoring System based On IOT, Using 4G/LTEVehicle Monitoring System based On IOT, Using 4G/LTE
Vehicle Monitoring System based On IOT, Using 4G/LTEDr. Amarjeet Singh
 
IRJET - Applications of Image and Video Deduplication: A Survey
IRJET -  	  Applications of Image and Video Deduplication: A SurveyIRJET -  	  Applications of Image and Video Deduplication: A Survey
IRJET - Applications of Image and Video Deduplication: A SurveyIRJET Journal
 
IRJET- Recognition of Human Action Interaction using Motion History Image
IRJET-  	  Recognition of Human Action Interaction using Motion History ImageIRJET-  	  Recognition of Human Action Interaction using Motion History Image
IRJET- Recognition of Human Action Interaction using Motion History ImageIRJET Journal
 
Yoga Posture Classification using Computer Vision
Yoga Posture Classification using Computer VisionYoga Posture Classification using Computer Vision
Yoga Posture Classification using Computer VisionDr. Amarjeet Singh
 
A multi-task learning based hybrid prediction algorithm for privacy preservin...
A multi-task learning based hybrid prediction algorithm for privacy preservin...A multi-task learning based hybrid prediction algorithm for privacy preservin...
A multi-task learning based hybrid prediction algorithm for privacy preservin...journalBEEI
 
IRJET- Behavior Analysis from Videos using Motion based Feature Extraction
IRJET-  	  Behavior Analysis from Videos using Motion based Feature ExtractionIRJET-  	  Behavior Analysis from Videos using Motion based Feature Extraction
IRJET- Behavior Analysis from Videos using Motion based Feature ExtractionIRJET Journal
 
Survey on video object detection & tracking
Survey on video object detection & trackingSurvey on video object detection & tracking
Survey on video object detection & trackingijctet
 
IRJET - Direct Me-Nevigation for Blind People
IRJET -  	  Direct Me-Nevigation for Blind PeopleIRJET -  	  Direct Me-Nevigation for Blind People
IRJET - Direct Me-Nevigation for Blind PeopleIRJET Journal
 
Activity recognition using histogram of
Activity recognition using histogram ofActivity recognition using histogram of
Activity recognition using histogram ofijcseit
 
Hardoon Image Ranking With Implicit Feedback From Eye Movements
Hardoon Image Ranking With Implicit Feedback From Eye MovementsHardoon Image Ranking With Implicit Feedback From Eye Movements
Hardoon Image Ranking With Implicit Feedback From Eye MovementsKalle
 

La actualidad más candente (16)

META-HEURISTICS BASED ARF OPTIMIZATION FOR IMAGE RETRIEVAL
META-HEURISTICS BASED ARF OPTIMIZATION FOR IMAGE RETRIEVALMETA-HEURISTICS BASED ARF OPTIMIZATION FOR IMAGE RETRIEVAL
META-HEURISTICS BASED ARF OPTIMIZATION FOR IMAGE RETRIEVAL
 
IRJET-Real-Time Object Detection: A Survey
IRJET-Real-Time Object Detection: A SurveyIRJET-Real-Time Object Detection: A Survey
IRJET-Real-Time Object Detection: A Survey
 
IRJET- Video Forgery Detection using Machine Learning
IRJET-  	  Video Forgery Detection using Machine LearningIRJET-  	  Video Forgery Detection using Machine Learning
IRJET- Video Forgery Detection using Machine Learning
 
Vehicle Monitoring System based On IOT, Using 4G/LTE
Vehicle Monitoring System based On IOT, Using 4G/LTEVehicle Monitoring System based On IOT, Using 4G/LTE
Vehicle Monitoring System based On IOT, Using 4G/LTE
 
IRJET - Applications of Image and Video Deduplication: A Survey
IRJET -  	  Applications of Image and Video Deduplication: A SurveyIRJET -  	  Applications of Image and Video Deduplication: A Survey
IRJET - Applications of Image and Video Deduplication: A Survey
 
IRJET- Recognition of Human Action Interaction using Motion History Image
IRJET-  	  Recognition of Human Action Interaction using Motion History ImageIRJET-  	  Recognition of Human Action Interaction using Motion History Image
IRJET- Recognition of Human Action Interaction using Motion History Image
 
Yoga Posture Classification using Computer Vision
Yoga Posture Classification using Computer VisionYoga Posture Classification using Computer Vision
Yoga Posture Classification using Computer Vision
 
Dq4301702706
Dq4301702706Dq4301702706
Dq4301702706
 
A multi-task learning based hybrid prediction algorithm for privacy preservin...
A multi-task learning based hybrid prediction algorithm for privacy preservin...A multi-task learning based hybrid prediction algorithm for privacy preservin...
A multi-task learning based hybrid prediction algorithm for privacy preservin...
 
IRJET- Behavior Analysis from Videos using Motion based Feature Extraction
IRJET-  	  Behavior Analysis from Videos using Motion based Feature ExtractionIRJET-  	  Behavior Analysis from Videos using Motion based Feature Extraction
IRJET- Behavior Analysis from Videos using Motion based Feature Extraction
 
Survey on video object detection & tracking
Survey on video object detection & trackingSurvey on video object detection & tracking
Survey on video object detection & tracking
 
IRJET - Direct Me-Nevigation for Blind People
IRJET -  	  Direct Me-Nevigation for Blind PeopleIRJET -  	  Direct Me-Nevigation for Blind People
IRJET - Direct Me-Nevigation for Blind People
 
Activity recognition using histogram of
Activity recognition using histogram ofActivity recognition using histogram of
Activity recognition using histogram of
 
final ppt
final pptfinal ppt
final ppt
 
K1803027074
K1803027074K1803027074
K1803027074
 
Hardoon Image Ranking With Implicit Feedback From Eye Movements
Hardoon Image Ranking With Implicit Feedback From Eye MovementsHardoon Image Ranking With Implicit Feedback From Eye Movements
Hardoon Image Ranking With Implicit Feedback From Eye Movements
 

Destacado

NT2015_Infografika_strona_1
NT2015_Infografika_strona_1NT2015_Infografika_strona_1
NT2015_Infografika_strona_1Iwona Janas
 
Morfología del pie de deportistas que practican descalzos
Morfología del pie de deportistas que practican descalzosMorfología del pie de deportistas que practican descalzos
Morfología del pie de deportistas que practican descalzosCelso Sánchez Ramírez
 
Most Active Award LCC_2015.PDF
Most Active Award LCC_2015.PDFMost Active Award LCC_2015.PDF
Most Active Award LCC_2015.PDFTatiana Popa
 
Volume 2-issue-6-1945-1949
Volume 2-issue-6-1945-1949Volume 2-issue-6-1945-1949
Volume 2-issue-6-1945-1949Editor IJARCET
 
Diapositivas khritoll
Diapositivas khritollDiapositivas khritoll
Diapositivas khritollKHAROL
 
ЭПУС Эффективная публичная служба
ЭПУС Эффективная публичная службаЭПУС Эффективная публичная служба
ЭПУС Эффективная публичная службаDmitry Maslov
 

Destacado (10)

Bachelor Degree
Bachelor DegreeBachelor Degree
Bachelor Degree
 
NT2015_Infografika_strona_1
NT2015_Infografika_strona_1NT2015_Infografika_strona_1
NT2015_Infografika_strona_1
 
Morfología del pie de deportistas que practican descalzos
Morfología del pie de deportistas que practican descalzosMorfología del pie de deportistas que practican descalzos
Morfología del pie de deportistas que practican descalzos
 
Most Active Award LCC_2015.PDF
Most Active Award LCC_2015.PDFMost Active Award LCC_2015.PDF
Most Active Award LCC_2015.PDF
 
Volume 2-issue-6-1945-1949
Volume 2-issue-6-1945-1949Volume 2-issue-6-1945-1949
Volume 2-issue-6-1945-1949
 
Diapositivas khritoll
Diapositivas khritollDiapositivas khritoll
Diapositivas khritoll
 
ЭПУС Эффективная публичная служба
ЭПУС Эффективная публичная службаЭПУС Эффективная публичная служба
ЭПУС Эффективная публичная служба
 
1678 1683
1678 16831678 1683
1678 1683
 
1834 1840
1834 18401834 1840
1834 1840
 
Gbi 1 (1)
Gbi 1 (1)Gbi 1 (1)
Gbi 1 (1)
 

Similar a Volume 2-issue-6-1960-1964

Background Subtraction Algorithm Based Human Behavior Detection
Background Subtraction Algorithm Based Human Behavior DetectionBackground Subtraction Algorithm Based Human Behavior Detection
Background Subtraction Algorithm Based Human Behavior DetectionIJERA Editor
 
E03404025032
E03404025032E03404025032
E03404025032theijes
 
IRJET- Automated Attendance System using Face Recognition
IRJET-  	  Automated Attendance System using Face RecognitionIRJET-  	  Automated Attendance System using Face Recognition
IRJET- Automated Attendance System using Face RecognitionIRJET Journal
 
System analysis and design for multimedia retrieval systems
System analysis and design for multimedia retrieval systemsSystem analysis and design for multimedia retrieval systems
System analysis and design for multimedia retrieval systemsijma
 
Face Annotation using Co-Relation based Matching for Improving Image Mining ...
Face Annotation using Co-Relation based Matching  for Improving Image Mining ...Face Annotation using Co-Relation based Matching  for Improving Image Mining ...
Face Annotation using Co-Relation based Matching for Improving Image Mining ...IRJET Journal
 
A Review on Matching For Sketch Technique
A Review on Matching For Sketch TechniqueA Review on Matching For Sketch Technique
A Review on Matching For Sketch TechniqueIOSR Journals
 
IRJET- A Survey on the Enhancement of Video Action Recognition using Semi-Sup...
IRJET- A Survey on the Enhancement of Video Action Recognition using Semi-Sup...IRJET- A Survey on the Enhancement of Video Action Recognition using Semi-Sup...
IRJET- A Survey on the Enhancement of Video Action Recognition using Semi-Sup...IRJET Journal
 
Announcing the Final Examination of Mr. Paul Smith for the ...
Announcing the Final Examination of Mr. Paul Smith for the ...Announcing the Final Examination of Mr. Paul Smith for the ...
Announcing the Final Examination of Mr. Paul Smith for the ...butest
 
Automatic Visual Concept Detection in Videos: Review
Automatic Visual Concept Detection in Videos: ReviewAutomatic Visual Concept Detection in Videos: Review
Automatic Visual Concept Detection in Videos: ReviewIRJET Journal
 
IRJET- Review on Human Action Detection in Stored Videos using Support Vector...
IRJET- Review on Human Action Detection in Stored Videos using Support Vector...IRJET- Review on Human Action Detection in Stored Videos using Support Vector...
IRJET- Review on Human Action Detection in Stored Videos using Support Vector...IRJET Journal
 
HUMAN IDENTIFIER WITH MANNERISM USING DEEP LEARNING
HUMAN IDENTIFIER WITH MANNERISM USING DEEP LEARNINGHUMAN IDENTIFIER WITH MANNERISM USING DEEP LEARNING
HUMAN IDENTIFIER WITH MANNERISM USING DEEP LEARNINGIRJET Journal
 
VIDEO SEGMENTATION FOR MOVING OBJECT DETECTION USING LOCAL CHANGE & ENTROPY B...
VIDEO SEGMENTATION FOR MOVING OBJECT DETECTION USING LOCAL CHANGE & ENTROPY B...VIDEO SEGMENTATION FOR MOVING OBJECT DETECTION USING LOCAL CHANGE & ENTROPY B...
VIDEO SEGMENTATION FOR MOVING OBJECT DETECTION USING LOCAL CHANGE & ENTROPY B...cscpconf
 
Human Activity Recognition
Human Activity RecognitionHuman Activity Recognition
Human Activity RecognitionIRJET Journal
 
IRJET- Prediction of Anomalous Activities in a Video
IRJET-  	  Prediction of Anomalous Activities in a VideoIRJET-  	  Prediction of Anomalous Activities in a Video
IRJET- Prediction of Anomalous Activities in a VideoIRJET Journal
 
Automatic video censoring system using deep learning
Automatic video censoring system using deep learningAutomatic video censoring system using deep learning
Automatic video censoring system using deep learningIJECEIAES
 
Secure IoT Systems Monitor Framework using Probabilistic Image Encryption
Secure IoT Systems Monitor Framework using Probabilistic Image EncryptionSecure IoT Systems Monitor Framework using Probabilistic Image Encryption
Secure IoT Systems Monitor Framework using Probabilistic Image EncryptionIJAEMSJORNAL
 
Survey on Human Behavior Recognition using CNN
Survey on Human Behavior Recognition using CNNSurvey on Human Behavior Recognition using CNN
Survey on Human Behavior Recognition using CNNIRJET Journal
 
Tag based image retrieval (tbir) using automatic image annotation
Tag based image retrieval (tbir) using automatic image annotationTag based image retrieval (tbir) using automatic image annotation
Tag based image retrieval (tbir) using automatic image annotationeSAT Journals
 
Tag based image retrieval (tbir) using automatic image annotation
Tag based image retrieval (tbir) using automatic image annotationTag based image retrieval (tbir) using automatic image annotation
Tag based image retrieval (tbir) using automatic image annotationeSAT Publishing House
 

Similar a Volume 2-issue-6-1960-1964 (20)

Background Subtraction Algorithm Based Human Behavior Detection
Background Subtraction Algorithm Based Human Behavior DetectionBackground Subtraction Algorithm Based Human Behavior Detection
Background Subtraction Algorithm Based Human Behavior Detection
 
E03404025032
E03404025032E03404025032
E03404025032
 
IRJET- Automated Attendance System using Face Recognition
IRJET-  	  Automated Attendance System using Face RecognitionIRJET-  	  Automated Attendance System using Face Recognition
IRJET- Automated Attendance System using Face Recognition
 
System analysis and design for multimedia retrieval systems
System analysis and design for multimedia retrieval systemsSystem analysis and design for multimedia retrieval systems
System analysis and design for multimedia retrieval systems
 
Face Annotation using Co-Relation based Matching for Improving Image Mining ...
Face Annotation using Co-Relation based Matching  for Improving Image Mining ...Face Annotation using Co-Relation based Matching  for Improving Image Mining ...
Face Annotation using Co-Relation based Matching for Improving Image Mining ...
 
A Review on Matching For Sketch Technique
A Review on Matching For Sketch TechniqueA Review on Matching For Sketch Technique
A Review on Matching For Sketch Technique
 
IRJET- A Survey on the Enhancement of Video Action Recognition using Semi-Sup...
IRJET- A Survey on the Enhancement of Video Action Recognition using Semi-Sup...IRJET- A Survey on the Enhancement of Video Action Recognition using Semi-Sup...
IRJET- A Survey on the Enhancement of Video Action Recognition using Semi-Sup...
 
Announcing the Final Examination of Mr. Paul Smith for the ...
Announcing the Final Examination of Mr. Paul Smith for the ...Announcing the Final Examination of Mr. Paul Smith for the ...
Announcing the Final Examination of Mr. Paul Smith for the ...
 
Automatic Visual Concept Detection in Videos: Review
Automatic Visual Concept Detection in Videos: ReviewAutomatic Visual Concept Detection in Videos: Review
Automatic Visual Concept Detection in Videos: Review
 
IRJET- Review on Human Action Detection in Stored Videos using Support Vector...
IRJET- Review on Human Action Detection in Stored Videos using Support Vector...IRJET- Review on Human Action Detection in Stored Videos using Support Vector...
IRJET- Review on Human Action Detection in Stored Videos using Support Vector...
 
HUMAN IDENTIFIER WITH MANNERISM USING DEEP LEARNING
HUMAN IDENTIFIER WITH MANNERISM USING DEEP LEARNINGHUMAN IDENTIFIER WITH MANNERISM USING DEEP LEARNING
HUMAN IDENTIFIER WITH MANNERISM USING DEEP LEARNING
 
Csit3916
Csit3916Csit3916
Csit3916
 
VIDEO SEGMENTATION FOR MOVING OBJECT DETECTION USING LOCAL CHANGE & ENTROPY B...
VIDEO SEGMENTATION FOR MOVING OBJECT DETECTION USING LOCAL CHANGE & ENTROPY B...VIDEO SEGMENTATION FOR MOVING OBJECT DETECTION USING LOCAL CHANGE & ENTROPY B...
VIDEO SEGMENTATION FOR MOVING OBJECT DETECTION USING LOCAL CHANGE & ENTROPY B...
 
Human Activity Recognition
Human Activity RecognitionHuman Activity Recognition
Human Activity Recognition
 
IRJET- Prediction of Anomalous Activities in a Video
IRJET-  	  Prediction of Anomalous Activities in a VideoIRJET-  	  Prediction of Anomalous Activities in a Video
IRJET- Prediction of Anomalous Activities in a Video
 
Automatic video censoring system using deep learning
Automatic video censoring system using deep learningAutomatic video censoring system using deep learning
Automatic video censoring system using deep learning
 
Secure IoT Systems Monitor Framework using Probabilistic Image Encryption
Secure IoT Systems Monitor Framework using Probabilistic Image EncryptionSecure IoT Systems Monitor Framework using Probabilistic Image Encryption
Secure IoT Systems Monitor Framework using Probabilistic Image Encryption
 
Survey on Human Behavior Recognition using CNN
Survey on Human Behavior Recognition using CNNSurvey on Human Behavior Recognition using CNN
Survey on Human Behavior Recognition using CNN
 
Tag based image retrieval (tbir) using automatic image annotation
Tag based image retrieval (tbir) using automatic image annotationTag based image retrieval (tbir) using automatic image annotation
Tag based image retrieval (tbir) using automatic image annotation
 
Tag based image retrieval (tbir) using automatic image annotation
Tag based image retrieval (tbir) using automatic image annotationTag based image retrieval (tbir) using automatic image annotation
Tag based image retrieval (tbir) using automatic image annotation
 

Más de Editor IJARCET

Electrically small antennas: The art of miniaturization
Electrically small antennas: The art of miniaturizationElectrically small antennas: The art of miniaturization
Electrically small antennas: The art of miniaturizationEditor IJARCET
 
Volume 2-issue-6-2205-2207
Volume 2-issue-6-2205-2207Volume 2-issue-6-2205-2207
Volume 2-issue-6-2205-2207Editor IJARCET
 
Volume 2-issue-6-2195-2199
Volume 2-issue-6-2195-2199Volume 2-issue-6-2195-2199
Volume 2-issue-6-2195-2199Editor IJARCET
 
Volume 2-issue-6-2200-2204
Volume 2-issue-6-2200-2204Volume 2-issue-6-2200-2204
Volume 2-issue-6-2200-2204Editor IJARCET
 
Volume 2-issue-6-2190-2194
Volume 2-issue-6-2190-2194Volume 2-issue-6-2190-2194
Volume 2-issue-6-2190-2194Editor IJARCET
 
Volume 2-issue-6-2186-2189
Volume 2-issue-6-2186-2189Volume 2-issue-6-2186-2189
Volume 2-issue-6-2186-2189Editor IJARCET
 
Volume 2-issue-6-2177-2185
Volume 2-issue-6-2177-2185Volume 2-issue-6-2177-2185
Volume 2-issue-6-2177-2185Editor IJARCET
 
Volume 2-issue-6-2173-2176
Volume 2-issue-6-2173-2176Volume 2-issue-6-2173-2176
Volume 2-issue-6-2173-2176Editor IJARCET
 
Volume 2-issue-6-2165-2172
Volume 2-issue-6-2165-2172Volume 2-issue-6-2165-2172
Volume 2-issue-6-2165-2172Editor IJARCET
 
Volume 2-issue-6-2159-2164
Volume 2-issue-6-2159-2164Volume 2-issue-6-2159-2164
Volume 2-issue-6-2159-2164Editor IJARCET
 
Volume 2-issue-6-2155-2158
Volume 2-issue-6-2155-2158Volume 2-issue-6-2155-2158
Volume 2-issue-6-2155-2158Editor IJARCET
 
Volume 2-issue-6-2148-2154
Volume 2-issue-6-2148-2154Volume 2-issue-6-2148-2154
Volume 2-issue-6-2148-2154Editor IJARCET
 
Volume 2-issue-6-2143-2147
Volume 2-issue-6-2143-2147Volume 2-issue-6-2143-2147
Volume 2-issue-6-2143-2147Editor IJARCET
 
Volume 2-issue-6-2119-2124
Volume 2-issue-6-2119-2124Volume 2-issue-6-2119-2124
Volume 2-issue-6-2119-2124Editor IJARCET
 
Volume 2-issue-6-2139-2142
Volume 2-issue-6-2139-2142Volume 2-issue-6-2139-2142
Volume 2-issue-6-2139-2142Editor IJARCET
 
Volume 2-issue-6-2130-2138
Volume 2-issue-6-2130-2138Volume 2-issue-6-2130-2138
Volume 2-issue-6-2130-2138Editor IJARCET
 
Volume 2-issue-6-2125-2129
Volume 2-issue-6-2125-2129Volume 2-issue-6-2125-2129
Volume 2-issue-6-2125-2129Editor IJARCET
 
Volume 2-issue-6-2114-2118
Volume 2-issue-6-2114-2118Volume 2-issue-6-2114-2118
Volume 2-issue-6-2114-2118Editor IJARCET
 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Editor IJARCET
 
Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107Editor IJARCET
 

Más de Editor IJARCET (20)

Electrically small antennas: The art of miniaturization
Electrically small antennas: The art of miniaturizationElectrically small antennas: The art of miniaturization
Electrically small antennas: The art of miniaturization
 
Volume 2-issue-6-2205-2207
Volume 2-issue-6-2205-2207Volume 2-issue-6-2205-2207
Volume 2-issue-6-2205-2207
 
Volume 2-issue-6-2195-2199
Volume 2-issue-6-2195-2199Volume 2-issue-6-2195-2199
Volume 2-issue-6-2195-2199
 
Volume 2-issue-6-2200-2204
Volume 2-issue-6-2200-2204Volume 2-issue-6-2200-2204
Volume 2-issue-6-2200-2204
 
Volume 2-issue-6-2190-2194
Volume 2-issue-6-2190-2194Volume 2-issue-6-2190-2194
Volume 2-issue-6-2190-2194
 
Volume 2-issue-6-2186-2189
Volume 2-issue-6-2186-2189Volume 2-issue-6-2186-2189
Volume 2-issue-6-2186-2189
 
Volume 2-issue-6-2177-2185
Volume 2-issue-6-2177-2185Volume 2-issue-6-2177-2185
Volume 2-issue-6-2177-2185
 
Volume 2-issue-6-2173-2176
Volume 2-issue-6-2173-2176Volume 2-issue-6-2173-2176
Volume 2-issue-6-2173-2176
 
Volume 2-issue-6-2165-2172
Volume 2-issue-6-2165-2172Volume 2-issue-6-2165-2172
Volume 2-issue-6-2165-2172
 
Volume 2-issue-6-2159-2164
Volume 2-issue-6-2159-2164Volume 2-issue-6-2159-2164
Volume 2-issue-6-2159-2164
 
Volume 2-issue-6-2155-2158
Volume 2-issue-6-2155-2158Volume 2-issue-6-2155-2158
Volume 2-issue-6-2155-2158
 
Volume 2-issue-6-2148-2154
Volume 2-issue-6-2148-2154Volume 2-issue-6-2148-2154
Volume 2-issue-6-2148-2154
 
Volume 2-issue-6-2143-2147
Volume 2-issue-6-2143-2147Volume 2-issue-6-2143-2147
Volume 2-issue-6-2143-2147
 
Volume 2-issue-6-2119-2124
Volume 2-issue-6-2119-2124Volume 2-issue-6-2119-2124
Volume 2-issue-6-2119-2124
 
Volume 2-issue-6-2139-2142
Volume 2-issue-6-2139-2142Volume 2-issue-6-2139-2142
Volume 2-issue-6-2139-2142
 
Volume 2-issue-6-2130-2138
Volume 2-issue-6-2130-2138Volume 2-issue-6-2130-2138
Volume 2-issue-6-2130-2138
 
Volume 2-issue-6-2125-2129
Volume 2-issue-6-2125-2129Volume 2-issue-6-2125-2129
Volume 2-issue-6-2125-2129
 
Volume 2-issue-6-2114-2118
Volume 2-issue-6-2114-2118Volume 2-issue-6-2114-2118
Volume 2-issue-6-2114-2118
 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113
 
Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107
 

Último

The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfEnterprise Knowledge
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEarley Information Science
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?Igalia
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...apidays
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptxHampshireHUG
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CVKhem
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...Neo4j
 

Último (20)

The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024Tata AIG General Insurance Company - Insurer Innovation Award 2024
Tata AIG General Insurance Company - Insurer Innovation Award 2024
 
Real Time Object Detection Using Open CV
Real Time Object Detection Using Open CVReal Time Object Detection Using Open CV
Real Time Object Detection Using Open CV
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 

Volume 2-issue-6-1960-1964

  • 1. ISSN: 2278 – 1323 International Journal of Advanced Research in Computer Engineering & Technology (IJARCET) Volume 2, Issue 6, June 2013 www.ijarcet.org 1960 Abstract- Vision-based human action recognition is the process of tagging image sequences with action labels. The identification of movement can be performed at various levels of abstraction. In existing system, after collecting an preliminary image set for each action by querying the Web, fit a logistic regression classifier to distinguish the foreground features of the correlated action from the background. In the action recognition process, PbHOG features can be used, which are more robust to the background clutter and variance of the domain. Using the initial classifier, incrementally collect more action images and, at the same time improve the model. Use nonnegative matrix factorization on this set to find the diverse pose clusters for that action and train separate local action classifiers for each cluster of poses. In proposed system it can be done by event monitoring to discover the most influential ordered pair for the human specific action. To this end, it make use of already annotated motion capture datasets and prepare action segmentation as a weakly supervised temporal clustering problem for an unknown number of clusters. Use the annotations to learn a distance metric for skeleton motion using relative comparisons in the form of samples of the same action are more similar than they are to a different action. The learned distance metric is then used to cluster the test sequences. To this end, we employ a hierarchical Dirichlet process that also estimates the number of clusters. Keywords-Action recognition, Heirarchical Dirichlet process 1.INTRODUCTION Vision-based human action recognition is the process of tagging images with action labels. Robust solution to this problem has applications in domains like visual surveillance, video retrieval and human–computer interaction. Manuscript received June, 2013. Soumya R PG Scholar, Computer Science and Engineering, Coimbatore Institute of Engineering and Technology., Narasipuram, Coimbatore, Tamil Nadu, ,India, 9895426268 R.Gnanakumari, Assistant Professor, Computer Science and Engineering, Coimbatore Institute of Engineering and Technology, Narasipuram,Coimbatore,TamilNadu,,India. The task is difficult due to variations in motion performance. The task of labeling videos containing human motion with action classes is motivated by many applications both offline and online. Automatic annotation [8] of video enables more efficient searching. Video annotation is the process of adding interactive commentary to the videos. That is adding background information about the video Image annotation is the process by which a computer system automatically assigns metadata in the form of captionining or keywords to a digital image.In machine learning, unsupervised learning [8] refers to the problem of trying to find hidden structure in unlabeled data. Since the examples given to the learner are unlabeled, there is no error or reward signal to evaluate a potential solution .This distinguishes unsupervised learning from supervised learning. Unsupervised learning is closely related to the problem of density estimation in statistics. However unsupervised learning also encompasses many other techniques that seek to summarize and explain key features of the data. Many methods employed in unsupervised learning are based on data mining method used to preprocess data. 
Unsupervised learning studies how systems can learn to represent particular input patterns in a way that reflects the statistical structure of the overall collection of patterns. The queries can be more naturally specified by the user in case of automatic image annotation. But it is not possible in content-based image retrieval. In the case of CBIR, users requires to search the image concepts such as color and texture.The traditional methods of image retrieval such as those used by libraries have relied on manually annotated images, which is expensive and time-consuming, especially given the large and constantly-growing image databases in existence. Action recognition can be increased by proposing action pose representation from web,but it needs a large amount of training videos.And it is a challenging process,because it needs to find out large labeled data that covers a diverse set of poses.Action recognition in uncontrolled videos is a difficult task, where it is very tough to find the large amount of necessary training videos to model all the variations of the domain. This problem has been addressed in this paper by proposing a generic method for action recognition. The idea is to use images collected from the Web to discover representations of actions and organize this knowledge to routinely annotate actions in videos. For this purpose, first use an incremental image retrieval procedure to collect and clean up the required training set for constructing the human pose classifiers. The approach is unsupervised because it require no human interference other than simply text querying the name of the action to an Discovering the Most Influential Human Action Using Web Based Classifier Soumya R, R.Gnanakumari
The benefit of this Web-based collection is twofold: 1) retrieval of action images is improved, and 2) a large generic database of action poses is collected, which can then be used for categorizing videos. We also explore how the Web-based pose classifiers can be utilized in conjunction with a limited number of labeled videos. Ordered pose pairs (OPP) are used to encode the temporal ordering of poses in the action model; modeling this temporal ordering can increase action recognition accuracy. By selecting key poses with the help of the Web-based classifiers, the categorization time is kept low. Our experiments demonstrate that, with or without available video data, the pose models learned from the Web can improve the performance of action recognition systems.

First, we propose a system that incrementally collects action images and videos from the Web by simple text querying. Second, we build action models from this noisy set of images in an unsupervised fashion: we present a method for cleaning the results of keyword retrieval and learn pose models from the cleaned dataset. Third, we propose PbHOG features for use in the presence of background clutter. We use the probability of boundary (Pb) operator as an edge detector to delineate the object boundaries and then extract HOG features from the Pb responses. The action models can be used to re-rank retrieved images and improve retrieval precision, and the action models learned from one set of videos are adapted for recognition in another set of videos using a transfer topic model. Fourth, we use the action pose models to annotate human actions in uncontrolled videos (e.g., YouTube videos). The action pose models learned from the Web can be used to locate the distinctive poses inside the videos and thereby improve action recognition; this key pose selection scheme also reduces the training time to a great extent. Fifth, we use the image data collected from the Web jointly with video data to improve action recognition. Sixth, we propose the OPP method for temporal reasoning about body poses within each action, and we use the Web-based pose classifiers to select key poses from human tracks for efficient training. The OPP descriptor models the temporal relationships between poses; by this, actions that share similar intermediate poses can be discriminated more accurately (a minimal sketch of such a descriptor follows the list below).

The main contributions are:
• Proposing a system which incrementally collects action images from the Web by simple text querying,
• Building action models from the noisy set of images in an unsupervised fashion, and
• Using the models to annotate human actions in uncontrolled videos, such as YouTube videos.
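The paper does not spell out the exact form of the OPP descriptor. One minimal construction consistent with the description above, assuming each frame has already been assigned one of K discrete pose labels, is a normalized histogram over ordered pose pairs (i, j) where pose i occurs before pose j in the track:

```python
import numpy as np

def opp_descriptor(pose_labels, num_poses):
    """Histogram of ordered pose pairs (i before j) within one track.

    pose_labels: sequence of per-frame pose cluster indices in [0, num_poses).
    Returns a flattened K x K descriptor normalized to sum to 1.
    """
    hist = np.zeros((num_poses, num_poses))
    for a in range(len(pose_labels)):
        for b in range(a + 1, len(pose_labels)):
            hist[pose_labels[a], pose_labels[b]] += 1.0
    total = hist.sum()
    if total > 0:
        hist /= total
    return hist.ravel()

# Example: a track cycling through poses 0 -> 1 -> 2 -> 1 -> 0
print(opp_descriptor([0, 1, 2, 1, 0], num_poses=3))
```

Because the histogram counts pairs rather than single poses, two actions that pass through the same intermediate poses in a different order yield different descriptors.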
2. RELATED WORK

Action recognition [3] can be achieved using local descriptors based on spatio-temporal interest points. In spatial recognition, local features have recently been combined with SVMs in a robust classification approach. In a similar manner, we investigate the combination of local space-time features and SVMs and apply the resulting approach to the recognition of human actions. Typical scenarios include scenes with cluttered, moving backgrounds, non-stationary cameras, scale variations, individual variations in the appearance and clothing of people, changes in lighting and viewpoint, and so forth. All of these conditions introduce challenging problems that have been addressed in computer vision in the past.

Recognizing human actions [2] is a key component in many computer vision applications, such as video surveillance, human-computer interfaces, video indexing and browsing, gesture recognition, and the analysis of sports events and dance choreography. Some recent work in action recognition has shown that it is useful to analyze actions by treating the video sequence as a space-time intensity volume. Analyzing actions directly in the space-time volume avoids some limitations of traditional approaches that involve key frames.

Automatically categorizing or localizing different actions [8] in video sequences is useful for a variety of tasks, such as video surveillance, object-level video summarization, video indexing, and digital library organization. However, robust action recognition remains a challenging task for computers due to cluttered backgrounds, camera motion, occlusion, and the geometric and photometric variance of objects. A lot of previous work has addressed these questions. One popular approach applies tracked motion trajectories of body parts to action recognition; this requires much human supervision, and the robustness of the algorithm depends heavily on the tracking system. Ke et al. apply spatio-temporal volumetric features that efficiently scan video sequences in space and time. Another approach uses local space-time patches of videos: Laptev et al. present a space-time interest point detector based on the ideas of the Harris and Förstner interest point operators.

3. IMAGE REPRESENTATION

Training classifiers requires a large amount of data, and collecting such data manually is very costly. Data collected from the Web is more diverse and less biased than home-made datasets, and may therefore be more suitable for real-world tasks. However, collecting useful training images from the Web is difficult. For a given query, the ratio of non-relevant images in the retrieved dataset is high, and the relevant image set comprises irregular subsets; to build a consistent training set, each of these subsets should be recognized and represented in the final set.

Action images are images in which at least one person is engaged in a particular action. For a given query, the number of non-relevant images is high; sometimes more than 50% of the retrieved images can be irrelevant. The results of keyword retrieval must therefore be cleaned, and pose models are then learned from the cleaned dataset. After collecting the relevant images, the first step is to extract the location of the human; if no human is detected in an image, the image is discarded. A human detector, which is effective in detecting people [10], can be used for this purpose.
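The paper does not name the specific detector it uses. As a stand-in, the sketch below runs OpenCV's stock HOG-based pedestrian detector and returns the detected bounding boxes, mirroring the filtering step just described: images for which the list comes back empty would be discarded from the training set.

```python
import cv2

# Stock OpenCV pedestrian detector (HOG features + a pretrained linear SVM).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(image_path):
    """Return person bounding boxes (x, y, w, h), or an empty list if none."""
    img = cv2.imread(image_path)
    if img is None:
        return []
    rects, weights = hog.detectMultiScale(img, winStride=(8, 8),
                                          padding=(8, 8))
    return list(rects)
```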
Figure 1. System architecture of the human action recognition system.

Figure 1 shows the system architecture of the human action recognition system. First, action images are collected from Web pages by simply text querying the name of the action to a Web search engine. A person detector is applied to the images, and the input video is converted into frames. The action classifier then classifies the actions based on the poses, and the identified actions are finally annotated.

3.1 Image collection from Web pages

Collecting useful training image datasets from the Web is difficult due to several challenges. First, for a given keyword-based image search, the ratio of non-relevant images in the retrieved dataset tends to be very high. Second, the relevant image set mostly comprises discontinuous subsets, due to different poses, viewpoints, and appearances. To build a reliable and effective training set, each of these subsets should be identified and represented in the final collected dataset. The number of objects, as well as the objects' pose and scale, varies considerably across retrieved images.

3.2 Person detection

The detected humans are not always centered within the bounding box. This issue can be solved with an alignment step based on the head area response: since there is high variance in the limb areas, head detections are the most reliable part of the detector. The head area should be positioned in the upper center of the bounding box, so for each image we take the detector's output for the head and update the bounding box accordingly.

3.3 Feature extraction

Once the humans are centered within the bounding box, an image descriptor is extracted for each detected area. The descriptor should provide a good representation of the poses. Histograms of Oriented Gradients (HOG) are successful for finding humans in images, but the clutter in Web images makes it difficult to obtain a clean pose description, and a simple gradient-filtering-based HOG descriptor is affected by noisy responses. The probability of boundary (Pb) operator can instead be used as the edge detector, and HOG features are then extracted from the Pb responses (a sketch of this combination follows).
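A full Pb implementation is involved, so as a rough, hedged approximation of the PbHOG idea (HOG computed over a boundary map rather than raw gradients), the sketch below substitutes a Canny edge map for the Pb response and extracts a HOG descriptor from it with scikit-image. The cell and block sizes are illustrative choices, not values from the paper.

```python
from skimage import io, color
from skimage.feature import canny, hog

def boundary_hog(image_path):
    """HOG descriptor computed over an edge map (a crude PbHOG stand-in)."""
    gray = color.rgb2gray(io.imread(image_path))
    edges = canny(gray, sigma=2.0)        # stand-in for the Pb operator
    descriptor = hog(edges.astype(float),
                     orientations=9,
                     pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2),
                     block_norm='L2-Hys')
    return descriptor
```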
3.4 Testing input

Using the training videos, one-vs-all SVM classifiers are learned over the OPP descriptors and used to test the input video. In the SVM classifier, the Hellinger kernel can be used, whose feature map can be computed explicitly by taking the square root of the descriptor values. When video data is available, it can be used to improve the action models learned from the Web image data.

3.5 Testing feature extraction

The Web-based classifiers are effective in selecting the reliable and informative parts of the sequences, so that only those detections are used for action inference. This selection can shrink the testing data and hence greatly reduce the computation time. For this purpose, the already trained Web-based pose classifiers are used. The selected poses and the associated local motion information can then be utilized for efficient action classification.

Figure 2. Output of the person detector.

3.6 Action classification

Using the training videos, one-vs-all SVM classifiers with the Hellinger kernel are learned over the OPP descriptors, as sketched below. When video data is available, the image data collected from the Web is used together with any available action videos to learn better classifiers over the combined data. Alternatively, the classifiers learned from the Web image data can be used to select the useful parts of the human tracks in videos, enabling more effective and efficient recognition.
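Because the Hellinger kernel's feature map is simply the elementwise square root of an L1-normalized descriptor, the kernel SVM can be replaced by a fast linear SVM applied to the transformed features. A minimal sketch with scikit-learn, where X_train (OPP descriptors, assumed nonnegative) and y_train (action labels) are hypothetical arrays:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

def hellinger_map(X):
    """Explicit Hellinger feature map: L1-normalize, then square root."""
    X = np.array(X, dtype=float)
    X /= np.maximum(X.sum(axis=1, keepdims=True), 1e-12)
    return np.sqrt(X)

def train_action_classifier(X_train, y_train):
    # A linear SVM on the mapped features behaves like a
    # Hellinger-kernel SVM on the original descriptors.
    clf = OneVsRestClassifier(LinearSVC())
    clf.fit(hellinger_map(X_train), y_train)
    return clf
```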
3.7 Metric learning from poses for temporal clustering of human motions

Using the action labels, constraints can be formulated in terms of similarity and dissimilarity between triplets of feature vectors. Under such constraints, a matrix A can be learned by employing Information-Theoretic Metric Learning (ITML). ITML finds a suitable matrix A by formulating the problem in terms of how similar A is to a given distance parameterized by A0 (typically, the identity or the sample covariance). Provided that the resulting distance is a Mahalanobis distance, the problem can be treated as the similarity of two Gaussian distributions parameterized by A and A0, respectively. This leads to an information-theoretic objective in terms of the Kullback-Leibler divergence between the two Gaussians. This divergence can be expressed as a LogDet divergence, yielding the following optimization problem:

    min_{A, ξ}  D_ld(A, A0) + λ · D_ld(diag(ξ), diag(c))        (1)

where D_ld is the LogDet divergence, c is the vector of constraints, ξ is a vector of slack variables (initialized to c and constrained to be componentwise nonnegative) that guarantees the existence of a solution, and λ is a parameter controlling the tradeoff between satisfying the constraints and minimizing the similarity between the distances.
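To make the quantities in Eq. (1) concrete, the following sketch computes the Mahalanobis distance parameterized by A and the LogDet divergence D_ld(A, A0) = tr(A A0^{-1}) - log det(A A0^{-1}) - n. It illustrates the objective only, not the constrained ITML solver itself.

```python
import numpy as np

def mahalanobis_sq(x, y, A):
    """Squared Mahalanobis distance d_A(x, y) = (x - y)^T A (x - y)."""
    d = x - y
    return float(d @ A @ d)

def logdet_divergence(A, A0):
    """D_ld(A, A0) = tr(A A0^{-1}) - log det(A A0^{-1}) - n."""
    n = A.shape[0]
    M = A @ np.linalg.inv(A0)
    _, logdet = np.linalg.slogdet(M)
    return float(np.trace(M) - logdet - n)

# The divergence of any matrix from itself is zero, and positive otherwise.
A0 = np.eye(3)
A = np.diag([1.0, 2.0, 0.5])
print(logdet_divergence(A0, A0))   # 0.0
print(logdet_divergence(A, A0))    # 0.5
```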
Figure 3. Annotated video frames.

4. PERFORMANCE COMPARISON

To verify the advantages of the proposed work, its performance has to be evaluated. The objective of this section is to compare the multiple-action recognition system with the single-action recognition system. The dataset for the experiment was a synthetic dataset. For multiple-action recognition, actions such as sitting, jumping, and walking were collected and annotated.

Table 1. Comparison of accuracy

Method                         Accuracy (%)
Single Action Recognition      85
Multiple Action Recognition    93

5. RESULTS AND DISCUSSIONS

Figure 4 compares the multiple-action and single-action recognition systems on the accuracy parameter. The accuracy of multiple-action recognition is found to be higher than that of the single-action recognition system.

Figure 4. Performance comparison.

6. CONCLUSION

In this paper, videos are collected from the Web, and the actions are identified and classified based on pose. The performance evaluation shows that the multiple-action recognition system achieves higher accuracy than the single-action recognition system.

7. REFERENCES

1. A. López-Méndez, J. Gall, and J. R. Casas, "Metric learning from poses for temporal clustering of human actions."
2. M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri, "Actions as space-time shapes," in Proc. ICCV, 2005, vol. 2, pp. 1395–1402.
3. C. Schuldt, I. Laptev, and B. Caputo, "Recognizing human actions: A local SVM approach," in Proc. ICPR, 2004, pp. 32–36.
4. T.-K. Kim, S.-F. Wong, and R. Cipolla, "Tensor canonical correlation analysis for action classification," presented at CVPR, Minneapolis, MN, 2007.
5. D. Lee and H. Seung, "Algorithms for non-negative matrix factorization," in Proc. NIPS, 2001, pp. 556–562.
6. D. Tran and A. Sorokin, "Human activity recognition with metric learning," in Proc. ECCV, 2008, pp. 548–561.
7. F. Schroff, A. Criminisi, and A. Zisserman, "Harvesting image databases from the Web," presented at ICCV, Rio de Janeiro, Brazil, 2007.
8. J. C. Niebles, H. Wang, and L. Fei-Fei, "Unsupervised learning of human action categories using spatial-temporal words," in Proc. BMVC, 2006, pp. 1249–1258.

Soumya R is currently pursuing the M.E. in Computer Science and Engineering at Coimbatore Institute of Engineering and Technology, Coimbatore, Tamil Nadu (Anna University, Chennai). She completed her B.Tech in Information Technology at M.E.S. College of Engineering, Kuttipuram, Kerala (Calicut University, Kerala) in 2010.

R. GnanaKumari is currently an Assistant Professor in the Department of Computer Science at Coimbatore Institute of Engineering and Technology, Coimbatore, Tamil Nadu (Anna University, Chennai). She completed her B.E. in Computer Science and Engineering at Sri Ramakrishna Engineering College, Coimbatore, in 2002 and her M.E. in Computer Science and Engineering at Anna University of Technology in 2011. She has about 3 years of experience in industry and 7.6 years of experience in teaching.